Colliding Communities, Cloud Native, and Telecommunications Standards – InfoQ.com



Dec 07, 2022 24 min read
by W. Watson, reviewed by Steef-Jan Wiggers

What happens when an ecosystem driven from the bottom up collides with a community characterized by top-down development? The 5G broadband cellular network standard from the 3rd Generation Partnership Project (3GPP), the Network Function Virtualization (NFV) standard from the European Telecommunications Standards Institute (ETSI), and the Service Function Chain RFC (request for comments) from the Internet Engineering Task Force (IETF) exemplify the telecommunication community’s approach to creating standards. These are usually developed in a coarse-grained, top-down fashion, with complementary features intertwined and specified together. The cloud native community takes a bottom-up approach, driven by the ecosystem’s needs and demonstrated by fine-grained implementations, which then produce the specifications. An example is the OpenMetrics standard that emerged from the observability and Prometheus cloud native communities. This contrast in standards development has made integrating the desired cloud native forms of availability, resilience, scalability, and interoperability into telecommunications a difficult process.
One of the main reasons agile development has superseded big up-front design, also known as the waterfall method, is that within agile, the customer has a tighter feedback loop with the implementers. Implementation-driven feedback loops expose deliverables earlier to the customer, so adjustments happen quicker.
Big, expensive projects like government system rewrites 1 have slowly given way to agile methods, but has this shift affected how we develop standards?
There are two types of standard design: top down and bottom up. Historically, standards bodies have been assembled with people from academia, industry, and corporate sources. The industry then adopts the standard based on buy-in from the major players who contributed to it, the marketing of the standard, and the validity of the standard itself. Control of the standard in this model flows from the top down.
Software standards that emerge organically do so from code bases that get adopted by the community. The code bases that need to be interchangeable (via libraries), interoperable (via APIs), or need to communicate well (via protocols), gain an advantage from being developed, scrutinized, and adopted in the open. 
Whether a standard is top down or bottom up, after its effectiveness, the next most important thing about it is where control lies. In the bottom-up model, control flows upward based on user adoption and the threat of forks; the ecosystem has the last say. This is not to say that bottom-up control can’t overreach. Bottom-up control can actually be more powerful than top-down control, as we will discuss later.
Telecommunications protocols and standards are usually top down and are developed by standards bodies like 3GPP, ETSI, and IETF. Cloud native standards are bottom up and emerge from subcommunities such as the observability community.
Top-down and bottom-up contrasts in design and control appear in many places. One is the Agile methodology’s pull versus push methods: in the push method, tasks are assigned, top down, from the project manager to the implementer; in the pull method, tasks are developed and prioritized by the team, put in a queue, and then selected from the queue by implementers. Design researchers also believe a significant portion of user design comes from the bottom up 2. Christopher Alexander, the architect whose work the software patterns movement is based on, likewise noted that implementers often choose the architecture (e.g., farmers building a barn), which is merely facilitated 3 by architects; these architects help best by providing a pattern language. An even stronger position on architecture comes from Peter Kropotkin, who argued that the great craftsmanship of the European cities 4 was not “commissioned” as a solitary effort 5 but rather developed by guilds in competition with each other, an argument he uses as an example of the process of mutual aid.
The avoidance of picking winners and losers is reflected in the CNCF technical oversight committee (TOC)’s principles as:
No kingmakers. Similar or competitive projects are not excluded for reasons of overlap.
In an environment with no kingmaker, competing solutions and implementations are allowed regardless of the political influence of the vendor. This is because vendor favoritism kills innovation, especially within the open-source community. While in the traditional business world, creating barriers to entry is considered a good practice, in the open-source community, barriers to entry are frowned upon. Open source communities reflect the marketplace of ideas, where every idea deserves a fair shot on a level playing field.
There is a fairness component to the design of standards as well. In the book Design Justice, Sasha Costanza-Chock reveals how credit for design is often misattributed in a top-down hierarchy 6. Attribution is an important factor in the incentives for open-source work 7. The misallocation of attribution and attention is destructive to open-source communities 8. Since attention and attribution are both rival resources (i.e., resources that, when allocated to one person, exclude another), the gratuitous reassignment of such a resource disincentivizes innovation. Reassignment of a resource that was developed openly in a market setting is called playing market 9 and has been shown to have negative consequences for innovation 10.
A CNCF TOC stance is that there shouldn’t be architects commissioned to develop the masterpiece standard at the head of an open-source community. The power in the community is that it isn’t centrally planned but project-centric:
We Are Project-Centric. Principle: If it can be on a modern public source code control system, then it can be a project. And we put projects front and center.
So the project developers act as educators for the implementation patterns that facilitate and solve cloud native problems.
An argument can be made that programming languages are the most reusable software 11. If this is the case, programming language design is foundational to all software, and the design thereof could be illuminating.
When the programming language Ada was developed, it was commissioned in a top-down manner by the U.S. Department of Defense 12 to be an international standard. Ada is an example of the kingmaker’s decree (the DoD’s) coming to fruition some 11 years 13 after the standard was developed. 14 Compare this to another general-purpose language, C++, which came into existence later, was developed more organically, albeit commercially, and overtook Ada in popularity. Initially, C++ was more of a de facto standard 15 than a de jure standard like Ada. Interestingly, both languages struggled to claim the crown of the dominant object-oriented language. C++ was object oriented earlier but wasn’t an ISO standard; Ada became the first internationally standardized object-oriented language in 1995. Still, C++ enjoyed so much commercial adoption by that time that the crown race was already won, even with the kingmaker decree for Ada happening five years earlier.
The 1990s brought open-source, implementation-based languages. One major difference in the adoption of these languages was that the interpreter or compiler was open source, and the implementation (usually built by the founder) served as the standard. Other implementations were judged by how closely they matched the original. The de facto standard would still be debated within each language community with the language’s founder, but with one key difference: the community could now fork the source, providing a credible threat to the benevolent dictator’s decisions.
The development of HTTP is another example of a code implementation driving a standard. The code for the Apache and Netscape web servers was passed around so that HTTP behavior could be shared between the different developers 16. The architecture of HTTP 1.0, combined with other web architecture, is arguably the world’s largest distributed application, and it wasn’t standardized until 1996.
What can we draw from the bottom up, open, and community-driven implementations? First, open-source implementations are not patent driven. Everyone must be able to use, modify, and contribute to the code for it to be community influenced and driven. Second, early adoption trumps codification. Passing around code is the primary form of communication in the early phase of successful, widely used standards.
When we reason about cloud native implementation principles, de facto standards, and patterns, we should describe what a pattern actually is. To start, events, or recurring activities, are the driving factor behind patterns. Events, when harnessed or facilitated, domesticate an environment for users. In this way, a pattern is a recurring series of facilitated steps. This goes for both physical and digital architecture. Multiple patterns, or ways to facilitate recurring events, can be part of an interconnecting, ubiquitous pattern language 17.
An example of recurring events within cloud native architecture is the numerous logs produced by the numerous nodes and containers in a deployment. A way to facilitate these logs is to treat them as event streams. Routing log data to stdout allows tools in the execution environment to capture it and is a recognized best practice. This behavior is a cloud native pattern, accepted by the cloud native community regardless of whether a standards body codifies the practice. Implementing the pattern within the projects of the cloud native community is all that is needed. Instead of a top-down decree, the cloud native pattern is the preferred form of exchange in community discourse.
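As a minimal sketch of the logs-as-event-streams pattern (the function name and fields below are illustrative, not from any particular library), a service can emit each log record as one JSON line on stdout and leave collection entirely to the execution environment:

```python
import json
import sys
import time

def log_event(stream, level, message, **fields):
    """Emit one structured log event as a single JSON line on `stream`.

    Writing to stdout, rather than to a file the application manages,
    lets the execution environment (e.g. the container runtime) capture,
    route, and aggregate the event stream.
    """
    event = {"ts": time.time(), "level": level, "msg": message, **fields}
    stream.write(json.dumps(event) + "\n")

# In a real service the stream is sys.stdout; the container runtime
# then collects the lines from the container's stdout pipe.
log_event(sys.stdout, "info", "request served", path="/healthz", status=200)
```

The application stays ignorant of where its logs go; swapping the log aggregator is an operational decision, not a code change.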
Describing the cloud native community’s implementation of interfaces as a ubiquitous language is useful because natural languages are driven from the bottom up. We will return to this later.
During the browser wars of the late ’90s and into the 2000s, the closed-source Internet Explorer competed with browsers built on open-source engines, such as Firefox, Safari, Chrome, and Opera (which switched to a Chromium base in 2013). The fierce competition incentivized differentiation in the form of proprietary extensions, which repeatedly threatened to balkanize the HTML standard and the browser world. Reusing open standards implemented in libraries, such as the WebKit rendering engine used by Safari and (initially) Chrome, proved to be a superior browser-implementation strategy. By 2020 even Microsoft’s new browser, Edge, was based on the open-source engine in Chromium.
With open implementations and standards, free use incentivizes widespread usage, creating network effects. This is probably the most pragmatic rationale for open implementations and standards. 
Because of this openness, more people critique not only the design of the standard but also its implementation. This creates an aura of legitimacy for a standard that emerges from a code base: the standard seems more thoroughly tested because one can see how it was implemented. Furthermore, when a standard is implemented openly, such as under an Apache, BSD, or GPL license, it is much harder to embrace, extend, and extinguish.
Previously, we established that the bottom-up community is composed of projects that communicate with one another through a ubiquitous language of patterns. What is the nature of these patterns, and how are they implemented? The cloud native patterns, such as the reconciler pattern, are implemented as interfaces exposed from open-source libraries. The interfaces between popular upstream/producer projects and downstream/consumer projects incentivize competing producer projects to “standardize” on rough de facto agreements that become a common interface. This is to say that if multiple implementations of upstream-producing projects are encouraged, the best interface should emerge. The CNCF TOC states it this way:
No single stack. Encourage interoperability for the emergence of a variety of stacks and patterns to serve the community and adopters.
The implementation of interfaces is presented by the community at conferences, which encourages participation in each project’s development. Again, Eric S. Raymond’s proverb “given enough eyeballs, all bugs are shallow” applies here.
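A rough sketch of how such a de facto interface works in practice (the `VolumePlugin` interface and its implementations below are hypothetical, loosely in the spirit of CSI, not actual CNCF APIs): consumers code against the interface, and competing producer projects supply interchangeable implementations.

```python
from abc import ABC, abstractmethod

class VolumePlugin(ABC):
    """A hypothetical, minimal storage interface.

    Consumer projects depend only on this interface; competing producer
    projects each supply an implementation, and the interface shape
    emerges from that competition rather than from a committee.
    """

    @abstractmethod
    def provision(self, name: str, size_gb: int) -> str:
        """Create a volume and return an opaque volume id."""

class LocalDiskPlugin(VolumePlugin):
    def provision(self, name, size_gb):
        return f"local://{name}/{size_gb}"

class NetworkBlockPlugin(VolumePlugin):
    def provision(self, name, size_gb):
        return f"nbd://{name}/{size_gb}"

def attach_storage(plugin: VolumePlugin, name: str) -> str:
    # The consumer never names a concrete producer; any conforming
    # implementation can be swapped in without changing this code.
    return plugin.provision(name, size_gb=10)
```

Because neither producer is privileged ("no kingmakers"), a new storage project can enter simply by conforming to the interface.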
One thing to beware of is that interfaces can still exert a form of control that is either too slow to respond to user demands or so manipulative that it kills competition. This is why the CNCF TOC shuns hard, codified standards developed by standards bodies:
Principle: Promote interfaces and de facto implementations over standards for real-world use. … We want markets and users to drive interop, not committees. We want to help real-world use happen faster, and foster collaboration. We do not wish to become gated on committees.
Standards documentation can still be used as a form of communication between producers or producers and consumers and can even be in the form of an RFC. Still, this documentation isn’t recognized formally outside the projects: 
CNCF may develop written materials in the style of the current CNI interface document or the style of an IETF RFC, for example. These CNCF “specification” materials are not “standards.” It is [possible in the future] that an independent and recognized international standards body takes a CNCF document as “upstream” and evolves it into a standard via (e.g.) the IETF process. The CNCF is morally supportive of independent parties doing this but does not see this work as its own responsibility.
Some examples of these rough, de facto implementations are CNI (network), CSI (storage), CRI (runtime), OpenMetrics, and CLI (logging).
Finally, within the cloud native community, the reconciliation pattern is valued as a way for the infrastructure to communicate with the applications it hosts. If a project supports application infrastructure, it is exceedingly important that the project’s interface support declarative configuration instead of imperative steps toward a correct state. This supports communication because declarative configuration is easier to reason about. 18
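A minimal sketch of the reconciliation pattern, assuming a toy model in which desired and actual state are maps from service name to replica count (all names here are illustrative): the caller declares the desired state, and a reconciliation pass computes the actions needed to converge, rather than the caller issuing imperative steps directly.

```python
def reconcile(desired: dict, actual: dict) -> list:
    """One pass of a reconciliation loop over declarative state.

    `desired` and `actual` map a service name to a replica count.
    Returns the (action, name, count) steps that converge actual
    toward desired; a controller would apply these and re-run.
    """
    actions = []
    for name, want in desired.items():
        have = actual.get(name, 0)
        if have < want:
            actions.append(("create", name, want - have))
        elif have > want:
            actions.append(("delete", name, have - want))
    # Anything running that is no longer declared gets removed.
    for name, have in actual.items():
        if name not in desired:
            actions.append(("delete", name, have))
    return actions
```

The declarative record (`desired`) can be code reviewed and versioned, and re-running the loop is what gives self-healing behavior: any drift in `actual` is corrected on the next pass.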
Roderick Chisholm identified a problem one runs into when attempting to create a set of rules: you need yet another set of rules for judging the validity, usefulness, and capability of those rules for producing outcomes and truth claims. This creates an infinite regress and is known as the problem of the criterion.
When reasoning about standards, we have to decide whether the top-down development of rules is more desirable than bottom-up development. From the standpoint of pure pragmatism, bottom-up development wins if you desire buy-in. Some of the hardest, seemingly intractable problems have been solved by bottom-up methods. Elinor Ostrom found that the longest-lasting institutions (defined as sets of rules), some in place for 500+ years, emerged from the bottom up with no centralized planner. In the book Convention, David Lewis proposed that languages develop from the bottom up based on the game-theoretic concept of focal points, which are not decided upon by any committee 19. Top-down rules production, on the other hand, seems at first glance to win if control is the main criterion, but even this may not be the case.
Top-down standards development prefers a command-and-control, decentralized power structure reminiscent of what Melvin Conway describes in his paper “How Do Committees Invent?” Applied to our topic: instead of software being constructed in the image of the organization that develops it, here it is the standards body that prefers to develop standards that structure power the way the body itself is organized. DNS is a perfect example. DNS is structured in a decentralized manner 20, just like the top-down body that created it; it is decentralized in the sense that power is arranged in a hierarchy of servers, each with a reduced level of authority. In contrast, a distributed standard like TCP/IP has no centers of control 21. Conway observes that since all large organizations have sub-organizations, there will be interfaces between those sub-organizations by which they communicate, and the sub-organizations are constrained by the parent organization in the responsibilities they have. These interfaces and constraints are the preferred sites of architectural choice within not just software but any artifact of the parent organization. Since the artifacts of standards bodies are protocols and specifications, the constraints and interests of the sub-organizations will dominate the design of how power is implemented within the protocol itself. Given this view of organizations and the desire for control, it seems that top-down design would be prone to produce standards that are closed source (with power in the hands of one implementer), while bottom-up design would prefer an open standard (with power in the hands of the people capable of implementing a successful fork). It also seems that top-down standards bodies would prefer decentralized standards like DNS (if the decentralized nodes represent the interests in the standards body), even if open, over distributed ones like TCP/IP 22.
Given this extension of Conway’s law, bottom-up, implementation-driven efforts have organizational structures that are the more likely of the two to produce protocols and specifications with distributed control.
For those who desire more control, the counterintuitive truth of a distributed standard is that the level of control is actually elevated, because each node has full buy-in to the standard’s power structure. This is why the bottom-up emergence of a standard like OpenMetrics is so much more subversive: the standard’s legitimacy earns much more buy-in. With a bottom-up standard, there is heightened control over both the standard and what the standard produces (often a distributed implementation). Bottom-up standards are like speed bumps: the community adopts them because adoption is in its own interest, just as a driver slows down at a speed bump in their own interest, not in the interest of the lawmaker 23.
We have described the rationale for the cloud native, bottom-up development of principles, best practices, and de facto standards. We have also described top-down, committee-driven standards such as those from the telecommunications industry. Is it possible for the two worlds to meet in the middle? What happens when the stewards of protocol meet with cloud native standards? Probably something like governance bodies with technical oversight committees, special interest groups, and working groups. It probably looks like the emergence of the architecture of the web, which was implemented years before standards bodies codified the standard.
To learn more about cloud native principles, join the CNCF’s cloud native network function working group. For information on CNCF’s CNF certification program, which verifies cloud native best practices in your network function, see here.
1. ​“The IRS conceded yesterday that it had spent $4 billion developing modern computer systems that a top official said ‘do not work in the real world,’ and proposed contracting out the processing of paper tax returns filed by individuals.” 
2. “In his text Democratizing Innovation, MIT management professor Eric Von Hippel both theoretically and empirically demonstrates that a significant portion of innovation is actually done by users, rather than manufacturers. Further, he finds that particular kinds of users (lead users) are the most likely to innovate and that their innovations are more likely to be attractive to a broader set of users.” Costanza-Chock, Sasha. Design Justice (Information Policy) (p. 111). MIT Press. Kindle Edition. 
3. “The Timeless Way of Building describes the fundamental nature of the task of making towns and buildings. It is shown there, that towns and buildings will not be able to become alive, unless they are made by all the people in society, and unless these people share a common pattern language, within which to make these buildings, and unless this common pattern language is alive itself.” Alexander, Christopher. A Pattern Language (Center for Environmental Structure Series) (p. ix). Oxford University Press. Kindle Edition. 
4. “The results of that new move which mankind made in the mediæval city were immense. At the beginning of the eleventh century the towns of Europe were small clusters of miserable huts, adorned but with low clumsy churches, the builders of which hardly knew how to make an arch; the arts, mostly consisting of some weaving and forging, were in their infancy; learning was found in but a few monasteries. Three hundred and fifty years later, the very face of Europe had been changed. The land was dotted with rich cities, surrounded by immense thick walls which were embellished by towers and gates, each of them a work of art in itself. The cathedrals, conceived in a grand style and profusely decorated, lifted their bell towers to the skies, displaying a purity of form and a boldness of imagination which we now vainly strive to attain.” Kropotkin, Peter. Mutual Aid: A Factor in Evolution (Annotated) (The Kropotkin Collection Book 2) (p. 114). Affordable Classics Limited. Kindle Edition. 
5. “A cathedral or a communal house symbolized the grandeur of an organism of which every mason and stone-cutter was the builder, and a mediæval building appears — not as a solitary effort to which thousands of slaves would have contributed the share assigned them by one person’s imagination; all the city contributed to it.” Kropotkin, Peter. Mutual Aid: A Factor in Evolution (Annotated) (The Kropotkin Collection Book 2) (p. 115). Affordable Classics Limited. Kindle Edition. 
6. “The typical capitalist firm is arranged in a pyramid structure so that resources (time, energy, credit, money) flow from bottom to top. This is also the case within most design firms. At the extreme, in large multinational design enterprises, armies of poorly paid underlings labor to produce work (concepts, sketches, prototypes), while the benefits (money, attribution, copyrights, and patents) flow upward into the hands of a small number of high-profile professional designers at the top.” Costanza-Chock, Sasha. Design Justice (Information Policy) (p. 113). MIT Press. Kindle Edition. 
7. “When people talk about the ‘attention economy,’ they’re usually referring to the consumer’s limited attention, as when multiple apps compete for a user’s time. But a producer’s limited attention is just as important to consider.” Eghbal, Nadia. Working in Public: The Making and Maintenance of Open Source Software (p. 214). Stripe Press. Kindle Edition. 
8. “The production of open source code, however, functions more like a commons—meaning that it is non-excludable and rivalrous—where attention is the rivalrous resource. Maintainers can’t stop users from bidding for their attention, but their attention can be depleted.”, Eghbal, Nadia. Working in Public: The Making and Maintenance of Open Source Software (pp. 161-162). Stripe Press. Kindle Edition. 
9. “…concentration of ultimate decision-making rights and responsibilities (i.e., ownership) in the hands of a central planning board,” Strategy, economic organizations, and the knowledge economy, Nicolai J Foss, pg. 174
10. “First, because managers could always be overruled by the planning authorities, they were not likely to take a long view, notably in their investment decisions. Second, because managers were not the ultimate owners, they were not the full residual claimants of their decisions and, hence, would not make efficient decisions. …The problem arises from the fact that it is hard for the ruler to commit to a noninterference policy,” Strategy, economic organizations, and the knowledge economy, Nicolai J Foss, pg. 175
11. “Bottom-up programming means writing a program as a series of layers, each of which serves as a language for the one above. This approach tends to yield smaller, more flexible programs. It’s also the best route to that holy grail, reusability. A language is, by definition, reusable. The more of your application you can push down into a language for writing that type of application, the more of your software will be reusable.” Graham, Paul. Hackers & Painters: Big Ideas from the Computer Age (Kindle Locations 2563-2566). O’Reilly Media. Kindle Edition. 
12. “The Ada standard was approved February 17, 1983. At that time, it became ANSI/MIL-STD 1815A-1983. Subsequently, it became an International Standards Organization (ISO) standard as well. Interpretations of the standard are made by an international committee that was originally under the auspices of the U.S. Department of Defense and called the Language Maintenance Committee, but now falls under the jurisdiction of ISO and is called Ada Rapporteur Group.” Weiderman, Nelson. A Comparison of ADA 83 and C++. 
13. “The price of making Ada a real, rather than hollow, standard has already been paid. Since the draft standard was released in 1980, it has taken 11 years of considerable effort to institute the technology.” Weiderman, Nelson. A Comparison of ADA 83 and C++. 
14. “Public Law 101-511, Section 8092 prescribes, ‘Notwithstanding any other provisions of law, after June 1, 1991, where cost effective, all Department of Defense software shall be written in the programming language Ada in the absence of special exemption by an official designated by the Secretary of Defense.’” Weiderman, Nelson. A Comparison of ADA 83 and C++.
15. “Commercial de facto standards such as C++ have the advantages of widespread visibility and acceptance. The marketplace moves rapidly to ensure that C++ can work with other software systems. C++ is ahead of Ada in this regard. Much commercial investment has been made in the infrastructure for tools and training.” Weiderman, Nelson. A Comparison of ADA 83 and C++.
16. “The early Web architecture was based on solid principles—separation of concerns, simplicity, and generality—but lacked an architectural description and rationale. The design was based on a set of informal hypertext notes 14, two early papers oriented towards the user community 12, 13, and archived discussions on the Web developer community mailing list (www-talk@info.cern.ch). In reality, however, the only true description of the early Web architecture was found within the implementations of libwww (the CERN protocol library for clients and servers), Mosaic (the NCSA browser client), and an assortment of other implementations that interoperated with them.” Fielding, Roy. (2000). Architectural Styles and the Design of Network-based Software Architectures. Pg. 90
17. “The elements of this language are entities called patterns. Each pattern describes a problem that occurs over and over again in our environment, and then describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without ever doing it the same way twice.” Alexander, Christopher. A Pattern Language (Center for Environmental Structure Series) (p. x). Oxford University Press. Kindle Edition. 
18. “Declarative configuration is different from imperative configuration, where you simply take a series of actions (e.g., apt-get install foo) to modify the world. Years of production experience have taught us that maintaining a written record of the system’s desired state leads to a more manageable, reliable system. Declarative configuration enables numerous advantages, including code review for configurations as well as documenting the current state of the world for distributed teams. Additionally, it is the basis for all of the self-healing behaviors in Kubernetes that keep applications running without user action.” Hightower, Kelsey; Burns, Brendan; Beda, Joe. Kubernetes: Up and Running: Dive into the Future of Infrastructure (Kindle Locations 892-896). Kindle Edition.
19. “Conventions are agreements—but did we ever agree with one another to abide by stipulated rules in our use of language? We did not. If our ancestors did, how should that concern us, who have forgotten? In any case, the conventions of language could not possibly have originated by agreement, since some of them would have been needed to provide the rudimentary language in which the first agreement was made.” David Lewis. Convention: A Philosophical Study (Kindle Locations 58-61). Kindle Edition. 
20. “Because the DNS system is structured like an inverted tree, each branch of the tree holds absolute control over everything below it.” Galloway, Alexander R. Protocol (Leonardo) (Kindle Locations 540-541). MIT Press. Kindle Edition. 
21. “On the one hand, TCP/IP (Transmission Control Protocol/Internet Protocol) enables the Internet to create horizontal distributions of information from one computer to another. On the other hand, the DNS (Domain Name System) vertically stratifies that horizontal logic through a set of regulatory bodies that manage Internet addresses and names. Understanding these two dynamics in the Internet means understanding the essential ambivalence in the way that power functions in control societies.” Galloway, Alexander R. Protocol (Leonardo) (Kindle Locations 189-193). MIT Press. Kindle Edition. 
22. “Distributed networks are native to Deleuze’s control societies. Each point in a distributed network is neither a central hub nor a satellite node—there are neither trunks nor leaves. The network contains nothing but ‘intelligent end-point systems that are self-deterministic, allowing each end-point system to communicate with any host it chooses.’ 15 Like the rhizome, each node in a distributed network may establish direct communication with another node, without having to appeal to a hierarchical intermediary.” Galloway, Alexander R. Protocol (Leonardo) (Kindle Locations 581-585). MIT Press. Kindle Edition. 
23. “With bumps, the driver wants to drive more slowly. With bumps, it becomes a virtue to drive slowly. But with police presence, driving slowly can never be more than coerced behavior. Thus, the signage appeals to the mind, while protocol always appeals to the body. The protocol is not a superego (like the police); instead, it always operates at the level of desire, at the level of ‘what we want.’” Galloway, Alexander R. Protocol (Leonardo) (Kindle Locations 4880-4884). MIT Press. Kindle Edition. 
