Continuous Glucose Monitors: Does Better Accuracy Mean Better Glycemic Control?
Accuracy is good, but precision is essential.
2026 Update: Dexcom’s new G7-15day is DIFFERENT from the old G7 that this article discusses. But it’s still worth reading!
As of January 2026, Dexcom has announced it is discontinuing the Dexcom G6 and is encouraging users to move to the new G7-15day. Given that the old G7’s readings were highly jumpy (noisy) and difficult for both humans and AID algorithms to interpret, discontinuing the G6 was highly concerning. However, Dexcom has since updated the G7 algorithm so it now reports readings virtually identical to the G6’s.
I’ve been wearing the new G7-15day for about a month now (Feb 2026), and am very happy with it. I will be writing a new article about it someday, but for now, readers are still highly encouraged to read this one. While it does discuss the old G7’s problems, its main point is to explain important factors about CGMs and T1D management; the G6/G7 comparison mostly serves to illustrate those broader points. Namely,
How to better understand blood glucose levels, how they’re measured, and why they lie at the heart of T1D management.
“Accuracy” is not the main goal of a CGM. It’s “precision”, and that matters a lot.
Effective dosing decisions are not based solely on glucose values, but on trends and patterns. This article explains these techniques.
Introduction
Pop quiz: Your CGM displays a glucose level of 120 mg/dL, but a finger-stick blood glucose meter (BGM) displays 180 mg/dL. What do you do next?
Take insulin because your “real” blood sugar is high.
Calibrate your CGM because the BGM is “more accurate”.
Ignore it because you don’t really understand how all this works anyway.
Sorry to say, it’s a trick question. To know what to do, and what not to do, you need to understand a lot more about how glucose actually moves around your body.
Dosing decisions depend on a lot of variables, most of which cannot be measured in your body. These include your metabolic activity, the rate of hepatic (liver) glucose production, the actual amount of insulin onboard (not just an estimate), food absorption variability, cortisol levels (an insulin antagonist), and the multitude of ways glucose can be absorbed without insulin mediation (e.g., your brain, and muscle contraction). So, when you look at your CGM number, it’s a tiny blip among all the variables that need to be known before you—or an algorithm—can make an informed dosing decision.
Suffice it to say, this is a stochastic problem, where the number of variables is not only large and unknown, but their interactions with one another amplify the randomness. This is why humans and algorithms have an upper bound on how accurate they can ever be in glucose management.
Nevertheless, we have to make the best of what we can know. In other words, we can make educated guesses about what the physiological reality might be for each of these variables. And glucose levels sit at the top of that list.
It is physically impossible to measure your body’s systemic glucose levels—that is, the total amount of glucose you have in your blood, tissues, organs, etc. And since glucose moves in and out of those compartments regularly—which is how we stay alive—the actual glucose “measure” is itself an estimate. Now, that estimate is critical to be sure, but the single number is not that useful.
What you really want to know is whether your glucose is rising or falling, and the rate of that change. And none of that can be determined in a single reading. The rate and direction of CGM readings is the only possible way to optimize dosing decisions, even with all the error built into this highly stochastic system.
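To make this concrete, here is a minimal sketch (in Python) of the general idea: estimate the direction and rate of change from the last several readings rather than reacting to any single value. This is my own illustration, not Dexcom’s or any pump vendor’s algorithm; the function name, window size, and 5-minute interval are assumptions for the example.

```python
# A minimal sketch of estimating trend from recent CGM readings (mg/dL),
# assuming one reading every 5 minutes. Not any vendor's actual algorithm.
import numpy as np

def glucose_trend(readings_mg_dl, interval_min=5, window=6):
    """Fit a line to the last `window` readings; return slope in mg/dL per minute."""
    recent = np.asarray(readings_mg_dl[-window:], dtype=float)
    minutes = np.arange(len(recent)) * interval_min
    slope, _intercept = np.polyfit(minutes, recent, 1)
    return slope

# Six readings over 30 minutes: a steady fall of roughly -1.4 mg/dL per minute
readings = [152, 148, 141, 133, 124, 118]
print(f"rate of change: {glucose_trend(readings):+.1f} mg/dL/min")
```

A single reading of 118 tells you very little; that same 118 at the end of this falling sequence suggests a low could be less than an hour away if nothing changes.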
Most people don’t really understand these concepts because of a single misunderstanding: Most believe that glucose in the body is evenly distributed, like colored dye in water. They assume that if you use a finger-prick glucose meter and it reports some number, that number reflects (with some small margin of error) your whole body’s glucose level. And that’s not true.
This article aims to correct those misunderstandings, and then use knowledge about how glucose actually behaves in the body so you can be better at making in-the-moment decisions about dosing.
And this is most critical for those on an automated insulin pump: You need to know when the pump’s own algorithm might be interpreting CGM values too literally, putting you at risk.
With that, let’s start with basic facts:
Glucose in the body is not evenly distributed. It is distributed in different concentrations in different parts of the body—including different areas of your blood—in order to ensure that glucose is available in locations where it’s vitally important. This is particularly important during low glucose levels, but also when it’s too high and your body wants to clear it out.
Capillary blood glucose measurements—the finger-stick method—are not a proxy for systemic glucose levels. They never were, but had been erroneously assumed as such. Not only are they a very poor way to manage T1D (relative to CGM data), they can be deceptive in how one evaluates CGM performance or interpretation. This is one of THE most important physiological facts that must be understood.
Because glucose is not evenly distributed, and because it moves around a lot, managing T1D is not about glucose readings by themselves, which is a measure of “accuracy”. The value of a CGM lies not in how well any of its readings compare with a blood glucose meter, but in a sequence of adjacent readings that allow you to infer systemic glucose levels, trajectory of movement, and rate of change. That is not accuracy, but precision. We’ll get into more of that soon enough.
Interstitial fluid, which a CGM reads, is just another compartment in the body. Glucose moves into that tissue at rates, times, and locations that are not necessarily tied to systemic glucose levels. The so-called “lag effect” from blood only happens in controlled lab settings, where a person is lying still and glucose and insulin are infused into the body at different rates in order to evaluate a CGM’s “comparison” to capillary blood. Lag does NOT exist in most real-world conditions.
Lastly, once you consider all the factors above, there is great variability—and therefore, error—in how well a CGM can possibly perform. Today’s technology is as good as it can possibly get, given the error ratios involved. Aiming (or hoping) for better is a fool’s errand.
Metrics like MARD, which compare the readings from a CGM to capillary blood, are useless in the real world. They are strictly marketing terms, and there is a growing contingent of scientists who feel the FDA should remove MARD from its standards and from marketing materials.
Many of the details in this article—including the statements above—come from a variety of published articles in well-established, peer-reviewed medical journals. A leading paper on glucose is this rigorous investigation, Differences in venous, capillary and interstitial glucose concentrations.
The Pleus et al. (2023) study demonstrates that glucose concentrations measured in venous, capillary, and interstitial compartments diverge significantly after a glucose load, with median differences exceeding 30% at 60 and 120 minutes post-ingestion—even in non-diabetic subjects. These discrepancies are not merely technical artifacts, but reflect real physiological compartmentalization and time-lag effects, which are further compounded by device-specific calibration and signal processing.
In other words, the more volatile systemic glucose levels are—that is, your whole body’s glucose levels, not just anywhere in particular—the more variability you’re going to see in each of these compartments. This is why and when a BGM reading rarely matches a CGM.
Below is a snapshot from my own data that will be explored in detail later in this article. But for this context, the black circles are BGM readings from capillary glucose (finger stick). The blue plots are G6 data, and the red plots are from the G7. You can easily see that capillary glucose readings have virtually no bearing on systemic glucose values, and can be highly misleading if they are used for daily T1D management.
Note that before CGMs, T1Ds had no choice but to use capillary blood measurements, but this was never shown to have much clinical benefit, other than to address truly extreme circumstances, such as hypoglycemia and hyperglycemia. Good, to be sure, and better than the urine testing that preceded it, but of little use compared with continuous readings from a CGM.
The true nature of T1D management, whether you make your own decisions or rely on an automated pump, is the ability to identify and react to glucose patterns. The differences between CGMs rest entirely on how well they present trajectories and rates of change. It’s not MARD.
This is not just conjecture. The article, “Limits to the Evaluation of the Accuracy of Continuous Glucose Monitoring Systems by Clinical Trials”, describes several ways MARD can be manipulated or misinterpreted:
Distribution of glucose values matters enormously. CGM accuracy varies by glucose range (worst in hypoglycemia, better in euglycemia, often better still in hyperglycemia). So a study that enrolls more hyperglycemic readings will produce a flattering MARD (the sketch after this list shows the effect). They propose a “Weighted MARD” (WMARD) to correct for this, but nobody uses it.
Reference device error gets baked in. If you’re comparing CGM to a fingerstick meter (not a lab-grade YSI analyzer), the meter’s own error inflates the apparent CGM error. As CGMs get more accurate, the reference device becomes the limiting factor—you might be measuring the meter’s inaccuracy more than the CGM’s.
Sample size creates massive uncertainty. They introduce a “MARD Reliability Index” showing that with typical study sizes, the confidence interval around MARD can be enormous.
Physiological time lag is unavoidable. Even a perfect CGM would show ~4% MARD just from the delay between blood and interstitial fluid glucose. Remember: these numbers come from LAB conditions, where lag time DOES happen.
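To see how much the mix of paired points matters, below is a hedged sketch of the standard MARD calculation with made-up numbers (they are not from any trial). The same sensor error behavior yields a worse MARD when more of the pairs land in hypoglycemia, and a flattering one when more land in hyperglycemia.

```python
# Illustration of how the distribution of paired points shifts MARD.
# All numbers are invented for this example; they are not trial data.
import numpy as np

def mard(cgm, ref):
    """Mean Absolute Relative Difference, in percent."""
    cgm, ref = np.asarray(cgm, float), np.asarray(ref, float)
    return np.mean(np.abs(cgm - ref) / ref) * 100

# Hypothetical paired readings (CGM, reference), mostly mid-range
pairs = [(110, 100), (125, 118), (95, 102), (150, 140), (180, 170)]
cgm, ref = zip(*pairs)
print(f"baseline MARD: {mard(cgm, ref):.1f}%")

# Roughly the same ~10 mg/dL absolute error is a large *relative* error in hypoglycemia
cgm2, ref2 = zip(*(pairs + [(55, 65), (60, 52)]))
print(f"more hypo pairs:  {mard(cgm2, ref2):.1f}%")   # MARD gets worse

# The same ~10 mg/dL absolute error is a small relative error in hyperglycemia
cgm3, ref3 = zip(*(pairs + [(260, 250), (295, 285)]))
print(f"more hyper pairs: {mard(cgm3, ref3):.1f}%")   # MARD looks better
```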
But let’s get back to the real world here. All of the above merely explains why MARD is a useless metric for evaluating CGMs. Self-managing T1D does not require that level of “accuracy” difference. In fact, Medtronic’s Chief Medical Officer Robert Vigersky published a review in Diabetes Technology & Therapeutics, titled The Myth of MARD, that makes an even more pointed argument:
MARD is not an FDA requirement. It’s not in the regulatory criteria for CGM clearance—the industry just adopted it as a marketing number.
No clinical evidence MARD differentiates safe from unsafe sensors. There’s no study proving that a CGM with 8% MARD produces better patient outcomes than one with 10% MARD.
Study design can swing MARD by several percentage points. Adding 15% more paired points in hypoglycemia changes MARD from 9.2% to 10.3%. Adding them in hyperglycemia drops it to 8.8%. Same sensor, different number.
AID systems compensate anyway. Real-world data shows that automated insulin delivery systems using sensors with MARDs of 9% (Dexcom G6) or 10% (Medtronic GS3) both achieve similar TIR and A1C outcomes.
If the only point of this article was to merely establish that MARD is useless, then we’d be done here. What really matters—and this is where we go beyond “accuracy” and into the true utility of a CGM—is where the art of T1D self-management lies: Learning to see—and react to—glucose patterns. That is the way to achieve healthy glycemic control.
You therefore need a CGM that produces good, reliable, actionable data. Noise is not actionable. This is especially true for those who use automated insulin pumps.
“Accuracy” is more of a marketing term than a meaningful measure of a CGM’s utility.
Real-World Scenario
Let’s return to the chart at the top of this article, which I’ve copied below.
The behavior of the G7 (red) is purportedly “more accurate” than the G6 (blue), but is it more “actionable” at any given moment? If you were watching these values tick by, one by one, would you really know whether you should be ready to correct with insulin or carbs? Or would you have to wait longer before that pattern emerges? (At which point, it’s too late.)
No, this is not just a matter of the G6 “smoothing” the data, because each reading is new, on its own. Prior data is not modified. Determining “precision” must take into account the fact that systemic glucose levels cannot possibly move in huge jumps, up or down. The G7’s data shows individual sample variation, but that kind of variation is not physiologically possible throughout your entire body.
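For illustration, here is one rough way to express that physiological constraint in code. The 4 mg/dL-per-minute ceiling is my own assumption for the example, not a figure from Dexcom; the point is simply that consecutive 5-minute readings implying a faster whole-body change are more plausibly sensor noise than real movement.

```python
# A rough plausibility check (not any vendor's algorithm): flag jumps between
# consecutive 5-minute readings that are too large to be a whole-body change.
MAX_RATE_MG_DL_PER_MIN = 4   # assumed ceiling, chosen for illustration
INTERVAL_MIN = 5

def flag_implausible(readings):
    """Return indices of readings that jump faster than the assumed ceiling."""
    flagged = []
    for i in range(1, len(readings)):
        rate = abs(readings[i] - readings[i - 1]) / INTERVAL_MIN
        if rate > MAX_RATE_MG_DL_PER_MIN:
            flagged.append(i)
    return flagged

# A 45 mg/dL swing within 5 minutes (9 mg/dL/min) is flagged as suspect,
# not treated as a real systemic change.
print(flag_implausible([118, 122, 167, 126, 124]))   # -> [2, 3]
```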
Put that into context with the pop quiz at the beginning: A finger-prick blood glucose test shows 180 mg/dL, while the CGM shows 120. Which reading is more reliable if you were to take action?
If you were to treat the 180 value as “correct,” despite the fact that the CGM pattern (in either sensor) was not just elsewhere, but moving in a confirmed direction, this could be a bad judgment. Let’s say you administer 2u of insulin because that’s what you’d do for a “true” glucose reading of 180. But if that reading was more of an anomaly due to the erratic nature of glucose movement, you’d likely be facing a severe hypo in about half an hour. That’s a big deal, and it happens far more often than T1Ds are aware of.
Sadly, because health care providers are not aware of this nuanced physiology, and they’ve bought into the claim that the G7 is so accurate, it’s too easy for both the user and the HCP to believe that the user made a bad call, or to just chalk it up to T1D being a really hard disease to manage (expletives edited to avoid content warnings).
If you’re a T1D and learning to make in-the-moment management decisions—or rely on an automated insulin pump to read this data—knowing this is important.
Let’s try another quiz: Below is a familiar screenshot from the Dexcom G6 app:
The “down arrow” on this Dexcom reading indicates glucose levels are dropping. And yes, 75 is definitely low. What do you do?
If you think you should eat to avoid a hypo, that’s making a decision based on ONE number. And that’s never a good idea. To illustrate, let’s expand to see the whole chart now:
Now that you see the whole chart, you could well be right. The levels were quite high (180s), then dropped to roughly the 150s, and then suddenly took a dive in the last 30 minutes.
But wait. We can also see that it’s starting to level off. Yes, only ONE reading suggests this tapering off, similarly to how it leveled off in the 150s. Will it do it again? Will it pop back up? Will it continue downward? Granted, only an experienced T1D would know to consider this, and to be sure, it would be from years of experience (ahem). But an automated algorithm certainly would. Or should.
That’s why I ignore directional arrows—they can’t answer these questions. Worse, they can be misleading: the down arrow may persist until glucose levels actually rise, even after they have stabilized.
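As a toy illustration of why I lean on several readings instead of the arrow, here is my own framing (not Dexcom’s arrow logic) of the difference between reacting to the latest step and checking whether a dive is actually decelerating.

```python
# Toy comparison: the last 5-minute step vs. whether the fall is slowing down.
# This is my own framing for illustration, not Dexcom's arrow algorithm.

def latest_rate(readings, interval_min=5):
    """Rate implied by the single most recent step, in mg/dL per minute."""
    return (readings[-1] - readings[-2]) / interval_min

def is_decelerating(readings, interval_min=5):
    """True if the last 10 minutes fell more slowly than the 10 minutes before."""
    recent = (readings[-1] - readings[-3]) / (2 * interval_min)
    prior = (readings[-3] - readings[-5]) / (2 * interval_min)
    return prior < 0 and recent > prior

# High 180s, a drop to the 150s, then a dive that hints at leveling off
trace = [183, 176, 158, 152, 150, 131, 109, 84, 75, 74]
print(latest_rate(trace))       # -0.2: one nearly flat step, which could be noise
print(is_decelerating(trace))   # True: several readings together show the dive is slowing
```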
In this case, I knew when I dosed last, and when I ate last, and my experience suggested that I should not eat anything, so as to avoid a glucose spike. This also illustrates that knowing glucose patterns alone isn’t enough; you also need to know when you last dosed or ate, and how much. You may know that, but unless you tell your automated pump, it won’t know, which hints at the realistic upper limit of fully automated insulin pumps.
Back to the glucose chart on the G6: We can see more information after time has passed:
Hey! 108 is a good number, and the slight upward trend doesn’t bother me. Still, I’ll keep an eye on it.
Here’s the thing that made all this happen: I TRUSTED THE G6 DATA. It was smooth, the trend line was reliable, and the readings did not wobble around. Most importantly, it was TIMELY. I was able to make an in-the-moment decision when it was at 75. By contrast, if the data was “wobbly,” I would have had to wait longer—potentially much longer—before the pattern stabilized, and by then, it might have been too late had I actually needed an intervention.
To illustrate the wobbly nature of data, let’s look at that G6/G7 chart and zoom in on the 4:00pm window. Here’s an enlargement of that data:
The G6 data is smooth, but the G7 is bouncing all around like a buoy bobbing in turbulent ocean waters. Those readings offer no help in determining directionality or rate of change.
I’ll return to this screenshot later in this article, but you can easily see that the differences between the G6 and G7 are clear and obvious: I can rely on the G6 data to make an informed decision, but anomalous spikes or dips that the G7 produces are intermittent and—by themselves—are entirely unreliable.
Later, I’ll show data illustrating that both the G6 and G7 largely reported the same average glucose levels over 30 days, with the same standard deviation. They differed by only ~1.7%. That’s marginal. From an “accuracy” point of view, they are largely identical. That, despite the fact that we have all this bouncing around from the G7.
But, perhaps these charts illustrate why it matters: It’s hard to make decisions when numbers bounce this way. Accuracy is not the problem—it’s whether the data is actionable.
Now, let’s be honest. You may not be watching your CGM like this (but you should). In fact, you may be using an automated insulin pump to do that work. But remember, those systems are doing exactly the same read-by-read analysis that I do for myself, making the same in-the-moment decisions.
Tandem’s Control-IQ and Omnipod artificially “smooth” the G7 data to approximate G6 data, but this approximation is not the same as providing good, reliable data in the first place.
The G6 didn’t just provide “smooth” data; its algorithm made established assumptions about the likelihood that any given reading was physiologically possible. The G7’s readings are not just erratic, they are not physically possible: your systemic glucose levels simply cannot move that way. The G7 can show intermittent spikes or dips, presumably because its algorithm is aiming to match capillary glucose levels, but now that we know glucose doesn’t move that way, showing those huge spikes and dips can be highly misleading and lead to poor dosing decisions.
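For readers curious what that kind of post-processing can look like, below is one generic sketch. It is emphatically not Tandem’s, Insulet’s, or Dexcom’s actual method, and the weights and step cap are assumptions; it simply shows how weighting new readings and capping per-step movement keeps a single spike from dragging the estimate around.

```python
# A generic smoothing sketch (not any vendor's actual filter): blend each new
# reading with the running estimate, and cap how far the estimate may move in
# one 5-minute step, so that a single implausible spike has limited effect.
ALPHA = 0.5            # weight of the newest reading (assumed)
MAX_STEP_MG_DL = 15    # max movement allowed per 5-minute step (assumed)

def smooth(readings):
    est = float(readings[0])
    out = [est]
    for r in readings[1:]:
        proposed = ALPHA * r + (1 - ALPHA) * est
        step = max(-MAX_STEP_MG_DL, min(MAX_STEP_MG_DL, proposed - est))
        est += step
        out.append(round(est, 1))
    return out

# The 167 spike moves the estimate by at most 15 mg/dL instead of ~45
print(smooth([118, 122, 167, 126, 124]))   # [118.0, 120.0, 135.0, 130.5, 127.2]
```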
And that’s where the risk lies with G7 data. But more importantly, that’s where the risk lies with so-called “accurate” data. You don’t want accuracy, you want precision.
Read that bold text again—seriously. Out loud. With a British accent if you must. It’s that important.
Now let’s expand this to the wider data I collected over a month.
Does the G7 yield greater glucose control?
Before I explain how I tested the G6 vs the G7, I need to make it clear that Dexcom’s clinical trial that demonstrated the MARD level for the G7 was not intended to claim that the G7 resulted in healthier outcomes. That’s a very different goal. The company only intended to conduct an “efficacy trial,” which merely shows that the sensor performs well enough to be approved by the FDA.
What Dexcom did not do is perform an “effectiveness trial,” which is when test subjects would wear each of the two sensors and make real-time management decisions under real-world conditions. I explain this in much greater detail in my article on how to evaluate clinical trials.
Since no trials have tested the G7 this way, I did it on myself. As it happens, my T1D is under very tight control, where my time in range is 95%, with <2% below range (70 mg/dL) and <4% above range (180 mg/dL).
NOTE: I do not aim for this level of tight control. I do not have “targets” in mind. I am not fanatical or obsessed about numbers. I am not on a low-carb diet. I merely follow the four basic habits of T1D management, which I describe in my article, The Four Habits of Healthy T1Ds. Habit #1: Watch your CGM and get to know your patterns; Habit #2: Make small interventions with carbs or insulin “as needed.”
You can read the article to see what habits #3 and #4 are.
During March, 2023, I wore both the G6 and G7 at the same time, but would only observe data from one sensor’s app at a time to make real-time management decisions. That’s right, I did NOT look at both apps and compare them in real-time. My goal was to determine which data made it easier or better to make in-the-moment decisions. After a period of a few days, I switched to the other sensor’s app. I repeated this back-and-forth between the two sensors several times.
Upon completion of the experiment, I downloaded all my data to Excel and analyzed it to see how my TIR varied between the two. I also collected data for insulin (from an InPen Bluetooth-enabled insulin pen), carbohydrates, exercise, sleep, and glucose levels from my Contour Next One blood glucose meter (BGM), which I included in my analysis report.
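My analysis was done in Excel, but for anyone who prefers code, a minimal sketch of the same day-by-day time-in-range calculation might look like the following. The column names and file name are assumptions, not the actual Dexcom export format.

```python
# Minimal sketch of a per-sensor, per-day time-in-range (TIR) calculation.
# Column names ('timestamp', 'glucose', 'sensor') and the file name are assumed.
import pandas as pd

LOW, HIGH = 70, 180   # mg/dL range used throughout this article

def daily_tir(df):
    """Percent of readings between LOW and HIGH, grouped by sensor and day."""
    df = df.copy()
    df["in_range"] = df["glucose"].between(LOW, HIGH)
    df["day"] = df["timestamp"].dt.date
    return (df.groupby(["sensor", "day"])["in_range"]
              .mean()            # fraction of readings in range
              .mul(100)
              .rename("tir_percent")
              .reset_index())

# Usage (hypothetical export file):
# readings = pd.read_csv("cgm_export.csv", parse_dates=["timestamp"])
# print(daily_tir(readings))
```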
The graphic below is the topline dashboard from my month wearing both the G6 and G7:
The first thing that pops out is that the G7 reported glucose values ~5% lower than the G6, which is consistent with what others have reported online. Aside from that, the two sensors appear roughly equivalent: The G6 averaged 121 mg/dL, versus the G7’s 116, and the standard deviations (SD) were 33 vs. 34, respectively.
But the real difference between the two sensors is shown by the time-in-range (TIR) stats on a day-by-day basis, as shown in the following graph:
You’ll notice the periodic pattern, where my TIR is very high, followed by days when my TIR slipped. These variations are entirely due to which sensor’s data I looked at to make in-the-moment decisions. When I used the G6 to make decisions, I achieved a TIR of >90%. When I used the G7, my TIR dropped to the ~70% range. The reason is obvious: The G7 data would at times present patterns that looked like my glucose levels were moving in a particular direction at a particular rate, which would have meant either taking insulin or consuming carbs. But in actuality, those patterns were anomalies, so my actions would send my real glucose levels off in the wrong direction. And sometimes, dangerously so.
Let’s zoom into the two-hour window between 4-6pm that I illustrated earlier. This kind of movement is highly representative of the kind of volatility seen in the G7 versus the G6, and why it’s hard to make real-time decisions.
Remember, I couldn’t see the G6 data (the smoother blue graph), so at 5:30pm, and with only the G7 data in view, I saw the very rapid rise from 88 to 155 in a matter of 30 minutes. Granted, the data leading up to that was highly erratic, but these successive readings were not–they were decisively rising, and fast. Without any idea where these levels might top out, especially given the rapid rate of change, I thought I needed to start bolusing.
As I always do, I began with small, incremental boluses, keeping a close eye on my glucose levels as they rose, waiting to see when they would level off or begin to fall. The goal is to avoid taking too much, or too little. I’m aiming for the Goldilocks effect.
Turns out, the G7’s data shot up to 270. If this had really been my glucose level, the stacked boluses I’d taken would have perfectly corrected these readings, and I would have had a soft landing. But, as the insulin started to kick in, my glucose levels plummeted to 49. And I was being conservative! Had I taken just one more unit, I may well have gone into a hypoglycemic coma, and I might not be here writing this article.
It was clear to me that the G7 readings were not giving me—or anyone—reliable information. Individual readings may have been “accurate,” but they were not representative of my actual systemic glucose levels.
To achieve tight glucose control, one must be able to look at short time windows and respond quickly to glucose movements, even if only with finely tuned adjustments. (Most people aren’t in tight control, and typically work on bigger time windows, so they won’t be as affected by these erratic readings.)
Over time, these anomalous readings will create more errors in decisions (or pump algorithms) than successes, which will impose an upper limit on how well they can ever perform.
Below are more daily charts to consider (without additional commentary). You can zoom in on your own and guess how/why I was able–or unable–to see trends in time to make decisions proactively.
The G7 generally reports lower BG values
While both the G6 and G7 were tighter (SD=29 and 28), the G7’s volatility is apparent.
The G7 appears to behave better this day, but real-time decisions were based on G6 data
The day was 100% in range, but the G7’s data was all over the map. (Thanks, G6!)
Summary
I personally suspect that few people will find the G7 helps T1Ds improve their glycemic control, and I predict there will likely be an increase in very serious adverse events. Automated pumps may even perform worse because users are even less engaged—and usually, less informed.
Nevertheless, I suspect Dexcom is primarily focused on the value of the improved MARD rating in their marketing plans. It’s invaluable to claim that your MARD is superior to all other CGMs, regardless of the dubious value of MARD.
It also helps that Dexcom’s target market is moving well beyond T1Ds into the T2D market, where there are nearly 40 million T2Ds, with another 98 million presumed to be undiagnosed. That, plus a very rapidly emerging market of non-diabetic “life-hackers”, such as athletes, health enthusiasts, and everyday consumers. [ 2024 Update: Dexcom has released a non-prescription version of the G7 called Stelo, and these people don’t care that much about volatility. ]
Of course, the downside for T1Ds is that some could actually see worse outcomes, and not even realize it. The G7’s propensity to report lower glucose averages (than what is actually in the bloodstream) may give people the false impression that their glycemic control has actually improved with the G7.
Dec 2025 Update
I wrote this article in Sept 2023, and predicted there would be very serious adverse events from the G7. Now in 2025, this prediction has come to pass: As of now, there have been 57 deaths (with new ones showing up every month), and several class action lawsuits by those who’ve suffered serious emergencies. These have caused Dexcom’s stock price to drop considerably. I wrote about this development in my article, Dexcom’s Stock Performance: A Business Masterclass in “Common Knowledge”.