Red Hat on Edge Complexity


Image: Red Hat Enterprise Linux logo on a laptop keyboard. (Tomasz/Adobe Stock)

Edge is complex. Once we get past the harrowing reality of that fundamental statement, perhaps we can start building frameworks, architectures, and services around the task at hand. Last year’s State of the Edge report from the Linux Foundation put it succinctly: “The edge, with all its complexity, has become a rapidly evolving, powerful, and demanding industry.”

Red Hat seems stoically appreciative of the complex role of edge management that awaits any organization now moving their IT stacks to this realm. The company sees edge computing as an opportunity to extend “the open hybrid cloud” to all data sources and end users that populate our planet.

Red Hat points to edge endpoints as diverse as those on the International Space Station and your local neighborhood pharmacy, and now aims to clarify and validate the parts of its own platform that address the specific challenges edge workloads present.

On the bleeding edge of the ledge

The mission is this: while the edge and the cloud are tightly coupled, we must enable computing decisions outside the data center, out at the farthest edge of the network.

“Enterprises are looking at edge computing as a way to optimize performance, cost, and efficiency to support a variety of use cases across industries ranging from smart city infrastructure to patient monitoring, gaming, and everything in between,” said Erica Langhi, Senior Solution Architect at Red Hat.

WATCH: Don’t Control Your Enthusiasm: Trends and Challenges in Edge Computing (TechRepublic)

The concept of edge computing clearly represents a new way of looking at where and how information is accessed and processed to create faster, more reliable and more secure applications. Langhi points out that while many software application developers may be familiar with the concept of decentralization in the broader network sense of the term, there are two key considerations that an edge developer should focus on.

“The first concerns data consistency,” Langhi said. “The more distributed edge data is, the more consistent it needs to be. When multiple users try to access or change the same data at the same time, everything needs to be synchronized. Edge developers need to think about messaging and data streaming capabilities as a powerful foundation to support data consistency for building edge-native data transport, data aggregation, and integrated edge application services.”
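To make that buffering-and-streaming idea concrete, here is a minimal sketch (not Red Hat code; the EdgeBuffer class, the send callback, and the sample readings are all hypothetical) of an edge process that records readings locally and forwards them upstream in order, so that an unreliable link does not break consistency:

```python
import json
import time
from collections import deque

class EdgeBuffer:
    """Buffers sensor readings at the edge and flushes them upstream in order."""

    def __init__(self, send):
        self.queue = deque()   # readings awaiting delivery, oldest first
        self.send = send       # callable that pushes one record to a broker or stream

    def record(self, sensor_id, value):
        # Timestamp locally so ordering survives intermittent connectivity.
        self.queue.append({"sensor": sensor_id, "value": value, "ts": time.time()})

    def flush(self):
        # Deliver oldest-first; stop (and retry later) on the first failure
        # so downstream consumers always see a consistent, ordered stream.
        while self.queue:
            record = self.queue[0]
            if not self.send(json.dumps(record)):
                break
            self.queue.popleft()

# Example usage with a stand-in transport that just prints the payload.
buffer = EdgeBuffer(send=lambda payload: print("publish:", payload) or True)
buffer.record("temp-01", 21.4)
buffer.record("temp-01", 21.7)
buffer.flush()
```

A real deployment would hand the callback to a messaging or data-streaming client rather than a print statement, but the oldest-first flush is the part that preserves ordering when connectivity comes and goes.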


Edge’s sparse requirements

This need to highlight the intricacies of edge environments stems from the fact that this is a different type of data processing. There is no customer spelling out document and user interface preferences in a requirements specification; at this level, we are working with more granular, machine-level technology constructs.

The second key consideration for edge developers is security and governance.

“Operating across a large data plane means the attack surface is now expanding beyond the data center with data at rest and in motion,” Langhi explained. “Edge developers can apply encryption techniques to protect data in these scenarios. With network complexity increasing as thousands of sensors or devices are connected, edge developers should seek to implement automated, consistent, scalable, and policy-driven network configurations to support security.”
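As a rough illustration of the data-at-rest half of that advice, and assuming the third-party cryptography package (not something the article names) is available on the device, an edge service could encrypt cached readings before they ever touch local storage:

```python
from cryptography.fernet import Fernet  # third-party package: pip install cryptography

# In practice the key would come from a hardware module or secrets manager,
# not be generated alongside the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

reading = b'{"sensor": "temp-01", "value": 21.4}'
encrypted = cipher.encrypt(reading)          # safe to write to local edge storage

# Later, an authorized process holding the key can recover the plaintext.
assert cipher.decrypt(encrypted) == reading
```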

Finally, she says, by choosing an immutable operating system, developers can enforce a reduced attack surface, helping organizations deal with security threats in an efficient manner.

But what really changes the game for developers moving from traditional software development to edge infrastructures is the diversity of target devices and their integrity. This is how Markus Eisele sees it in his role as developer strategist at Red Hat.

“Whereas developers typically think of frameworks and architects think of APIs and how to put everything back together, a distributed system with computing units at the edge requires a different approach,” said Eisele.

What is needed is a comprehensive and secure supply chain. This starts with integrated development environments — Eisele and his team point to Red Hat OpenShift Dev Spaces, a zero-configuration development environment that uses Kubernetes and containers — hosted on secured infrastructures to help developers build binaries for a variety of target platforms and compute units.

Binaries on the base

“Ideally, the automation at work here goes far beyond successful compilation to tested and signed binaries on verified base images,” Eisele said. “These scenarios can become very challenging from a governance perspective, but need to be repeatable for developers and minimally invasive to the inner and outer loop cycles. Although not much changes at first glance, there is even less room for error. Especially when you think about the security of the generated artifacts and how it all comes together and still allows the developers to be productive.”
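A heavily simplified way to picture the “verified” step in such a pipeline, leaving signing services and provenance tooling aside, is to refuse to promote an artifact whose digest does not match the value recorded when it was built. The function names below are illustrative, not part of any Red Hat tool:

```python
import hashlib

def artifact_digest(path: str, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 digest of a built artifact, streaming to limit memory use."""
    digest = hashlib.sha256()
    with open(path, "rb") as artifact:
        for chunk in iter(lambda: artifact.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: str, expected_digest: str) -> bool:
    """Gate promotion: only artifacts matching the recorded digest move forward."""
    return artifact_digest(path) == expected_digest
```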


Eisele’s inner and outer loop references pay homage to the complexity at work here. The inner loop is a single developer workflow where code can be quickly tested and modified. The outer loop is the point where code is committed to a version control system or becomes part of a software pipeline closer to the point of production deployment. For further clarification, we can also recall that the notion of software artifacts mentioned above denotes the full range of elements that a developer can use and/or create in order to build software. So this could include documentation and annotation notes, data models, databases, other forms of reference material, and the source code itself.

SEE: Hire Kit: Backend Developer (TechRepublic Premium)

What we do know for sure is that unlike data centers and the cloud, which have been around for decades, edge architectures are still evolving at an exponentially charged rate.

Parrying expediency

“The design decisions that architects and developers make today will have a lasting impact on tomorrow’s capabilities,” said Ishu Verma, technical evangelist for edge computing at Red Hat. “Some edge requirements are unique to each industry, but it is important that design decisions are not made specifically for the edge, as this can limit an organization’s future agility and scalability.”

Edge-centric Red Hat engineers insist that a better approach is to build solutions that can work on any infrastructure — cloud, on-premises, and edge — and across industries. The consensus here appears to be heavily focused on the choice of technologies such as containers, Kubernetes, and lightweight application services that can help create future-ready agility.

“Common elements of edge applications across multiple use cases include modularity, separation, and immutability, which makes containers a good fit,” said Verma. “Applications need to be deployed at many different edge levels, each with their unique resource characteristics. Combined with microservices, containers that represent instances of functions can be scaled up or down depending on underlying resources or conditions to meet customers’ needs at the edge.”
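The scale-up-or-down behavior Verma describes can be sketched with the proportional rule that Kubernetes’ Horizontal Pod Autoscaler applies to running replicas; the numbers here are purely illustrative:

```python
import math

def desired_replicas(current_replicas: int, current_metric: float, target_metric: float) -> int:
    """Proportional scaling rule: adjust replicas so the observed metric approaches its target."""
    return max(1, math.ceil(current_replicas * current_metric / target_metric))

# e.g. 4 replicas running at 80% CPU against a 50% target -> scale out to 7 replicas.
print(desired_replicas(4, 80.0, 50.0))
```

On a constrained edge node, the same rule runs in reverse: as demand falls, replicas are trimmed back to free the underlying resources.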


Edge, but at scale

All of these challenges lie ahead of us. But while the message is don’t panic, the task becomes complicated when we need software application development for edge environments that can scale safely. Edge at scale brings with it the challenge of managing thousands of edge endpoints deployed in many different locations.

“Interoperability is key to edge at scale, as it requires the same application to run anywhere without adapting it to a framework required by an infrastructure or cloud provider,” said Salim Khodri, edge go-to-market specialist for EMEA at Red Hat.

Khodri makes his comments consistent with the fact that developers want to know how to take advantage of the edge without changing the way they develop, deploy, and maintain applications. That means they want to understand how to accelerate the adoption of edge computing and combat the complexities of a distributed deployment by using their existing skills to make the experience of programming at the edge as consistent as possible.

“Consistent tools and best practices for modern application development, including CI/CD pipeline integration, open APIs, and Kubernetes-native tools, can help address these challenges,” explained Khodri. “This is to provide the portability and interoperability capabilities of edge applications in a multivendor environment along with application lifecycle management processes and tools at the distributed edge.”

It would be difficult to count the most important pieces of advice here on one hand; two hands would be challenging and might require the use of some toes as well. The keywords might be open systems, containers and microservices, configuration, automation and, of course, data.

The decentralized edge may stem from data center DNA and consistently maintain its close relationship with the cloud-native IT stack backbone, but it is, in essence, a separate pairing of relationships.


