
This Week in Programming: Honeycomb's ARM Advantage

This week at the AWS Summit in New York, we got a fascinating glimpse into how Honeycomb.io helps engineers debug their systems through the use of big data.
Honeycomb’s service is “differentiated by its scale and speed,” explained Liz Fong-Jones, Honeycomb principal developer advocate, during the Summit keynote.
The goal of Honeycomb’s service is for any engineer to answer any question about their malfunctioning or underperforming system within 10 seconds or less, even previously unasked questions that emerge from iterating on a train of thought, or, as she explained in her keynote breakout talk, to “follow the breadcrumbs.”
The secret sauce? The o11y company collects all the operational data it can from the client, stores it on AWS solid-state drives, then uses a combination of the AWS Lambda serverless service and speedy AWS Graviton ARM-based processors to parse the data and answer the queries.
The Honeycomb service draws on a variety of pre-packaged AWS analysis services. Some of the data is already captured by internal AWS services, including the Relational Database Service (RDS) and CloudWatch.
[Image: James Webb Space Telescope’s image of the galaxy cluster SMACS 0723. Each point of light represents an entire galaxy. (NASA)]
But also instrumental is the AWS Distro for OpenTelemetry, Amazon’s distribution of the Cloud Native Computing Foundation‘s open source package of APIs, libraries and agents for monitoring applications through distributed traces and metrics.
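To make that instrumentation step concrete, here is a minimal, hypothetical sketch of how a Go service might emit trace spans over OTLP using the OpenTelemetry SDK. The service name, span name and attribute are illustrative assumptions, not details from Honeycomb's actual setup, and the exporter endpoint is assumed to come from the standard OTEL_EXPORTER_OTLP_ENDPOINT environment variable.

package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	ctx := context.Background()

	// Export spans over OTLP/gRPC; the collector endpoint is read from
	// OTEL_EXPORTER_OTLP_ENDPOINT (an assumption for this sketch).
	exporter, err := otlptracegrpc.New(ctx)
	if err != nil {
		log.Fatalf("creating OTLP exporter: %v", err)
	}

	// Batch spans before export and register the provider globally.
	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exporter))
	defer func() { _ = tp.Shutdown(ctx) }()
	otel.SetTracerProvider(tp)

	// Wrap a unit of work in a span, as an instrumented service would.
	tracer := otel.Tracer("example-service") // hypothetical service name
	_, span := tracer.Start(ctx, "handle-request")
	span.SetAttributes(attribute.String("user.id", "demo")) // illustrative attribute
	// ... do the actual work here ...
	span.End()
}

Spans emitted this way flow to an OpenTelemetry collector or directly to a backend, which is where the pipeline described next picks up.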
Honeycomb pre-processes all the application-generated data and stores it in Amazon Simple Storage Service (S3), where it is then analyzed on the fly through the AWS Lambda serverless service. The service currently processes 2.5 million trace spans a second, up from 200,000 just three years ago. “Our customers are asking 10 times as many questions about 10 times as much data,” Fong-Jones said.
It’s a pretty impressive setup for the work of only 50 engineers. The setup consists of a combination of stateful and stateless services, built mostly in Go, with some Java and Node.js thrown in as well.
For stateless services, Honeycomb uses Amazon Elastic Kubernetes Service (EKS) running on both EC2 C6g Graviton2 and C7g Graviton3 instances.
Honeycomb appears to be bullish on the ARM architecture.
Fong-Jones noted that the company saw a 10% improvement in median latency when switching to Graviton2 from AWS M5 Intel Xeon-based instances. “The Graviton2 processor is just much more efficient, and we’re able to push much more load,” she said.
Moreover, A/B tests between Graviton2 and Graviton3 found a further 10% to 20% improvement in tail latency, and a 30% improvement in throughput and median latency. CPU utilization is also about 30% lower, “which means we can push it a lot harder,” she said.
Honeycomb also saves a bit of coin by using AWS Spot Instances, which are spare machines not currently in use within AWS. AWS provides a graceful termination handler that drains workloads when the capacity is reclaimed for use elsewhere. Honeycomb initially saved about 20% by moving some workloads to Spot.
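As a general illustration of how that graceful handoff works (not Honeycomb's actual handler, which in a Kubernetes environment would typically be AWS's node termination handler), a process on a Spot Instance can poll the EC2 instance metadata service for an interruption notice and start draining when one appears. The sketch below assumes IMDSv1 is enabled, and the drain step is a placeholder.

package main

import (
	"log"
	"net/http"
	"time"
)

// Documented IMDS path that returns a notice roughly two minutes before
// a Spot Instance is reclaimed; it returns 404 until then.
const spotActionURL = "http://169.254.169.254/latest/meta-data/spot/instance-action"

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	for {
		resp, err := client.Get(spotActionURL)
		if err == nil {
			if resp.StatusCode == http.StatusOK {
				resp.Body.Close()
				log.Println("spot interruption notice received, draining workloads")
				drain() // placeholder for stopping work and handing it off
				return
			}
			resp.Body.Close()
		}
		time.Sleep(5 * time.Second) // poll interval chosen for this sketch
	}
}

// drain stands in for whatever graceful shutdown the workload needs.
func drain() {}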
For Kafka streaming data ingest, Honeycomb uses EC2 Im4gn instances, which are built on AWS Nitro SSDs. Earlier, slower storage iterations left the CPU starved for work. “Right-sizing everything onto Im4gn lets us hit our network, CPU and storage thresholds appropriately,” she said.
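For context on what that ingest path looks like, here is a minimal, hypothetical Go producer writing telemetry events to a Kafka topic with the segmentio/kafka-go client. The broker address, topic name and payload are assumptions for illustration; the talk did not specify which client or schema Honeycomb uses.

package main

import (
	"context"
	"log"

	"github.com/segmentio/kafka-go"
)

func main() {
	// Writer configuration; broker address and topic are illustrative.
	w := &kafka.Writer{
		Addr:     kafka.TCP("localhost:9092"),
		Topic:    "telemetry-events", // hypothetical topic name
		Balancer: &kafka.LeastBytes{},
	}
	defer w.Close()

	// Each message might carry one serialized trace span or event.
	err := w.WriteMessages(context.Background(),
		kafka.Message{
			Key:   []byte("service-a"),
			Value: []byte(`{"name":"handle-request","duration_ms":12}`),
		},
	)
	if err != nil {
		log.Fatalf("writing message: %v", err)
	}
}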
Lambda provides another piece of the puzzle. Even using 100 speedy Graviton instances alone won’t entirely get the job done, given the millions of files stored on S3. This is where Lambda comes in, able to instantly provide up to “10s of thousands of parallel workers.”
“With AWS Lambda and Graviton combined together, we see about a 40% improvement in price performance,” she said.
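A rough sketch of that fan-out pattern is shown below, using the AWS SDK for Go v2 to invoke a query-worker Lambda function once per S3 object. The function name, payload shape and object keys are hypothetical; real code would bound concurrency and aggregate the workers' results.

package main

import (
	"context"
	"fmt"
	"log"
	"sync"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/lambda"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatalf("loading AWS config: %v", err)
	}
	client := lambda.NewFromConfig(cfg)

	// In practice these keys would come from listing segment files in S3.
	keys := []string{"segments/2022/07/a.gz", "segments/2022/07/b.gz"}

	var wg sync.WaitGroup
	for _, key := range keys {
		wg.Add(1)
		go func(key string) {
			defer wg.Done()
			// One synchronous invocation per segment; the function name is hypothetical.
			_, err := client.Invoke(ctx, &lambda.InvokeInput{
				FunctionName: aws.String("query-worker"),
				Payload:      []byte(fmt.Sprintf(`{"s3_key":%q}`, key)),
			})
			if err != nil {
				log.Printf("invoking worker for %s: %v", key, err)
			}
		}(key)
	}
	wg.Wait()
}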
As someone who’s new to the whole observability space I gotta say, traces make way the hell more sense to me than all the different kinds of ways to generate and aggregate metrics.
— Phillip Carter (@_cartermp) July 14, 2022

omg, tada.
ARM is inevitable. It is the future. It’s now on every major cloud provider – Amazon, Microsoft, Google, and Oracle.
It’s in your laptop if you’re using a modern Mac.
Welcome to the future, everyone. https://t.co/tjywueZLuB
— Liz Fong-Jones (方禮真) (@lizthegrey) July 13, 2022


I’m sorry I missed the political and civil servant history of James Webb who played a pervasive role in homophobic discrimination, helping set historical policy to remove/ban LGBTQ people from federal gov. This naming harms the amazing #JWST contributors https://t.co/jA7pRjaSgM
— Jennifer Riggins💙💛 (@jkriggins) July 15, 2022

The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Honeycomb.io.
