Continuous Glucose Monitors: Does Better Accuracy Mean Better Glycemic Control?
Accuracy is good, but precision is essential.

Introduction
Pop quiz: Your CGM displays a glucose level of 120 mg/dL, but a finger-stick blood glucose meter (BGM) displays 180 mg/dL. What do you do next?
Take insulin because your “real” blood sugar is high.
Calibrate your CGM because the BGM is “more accurate”.
Ignore it because you don’t really understand how all this works anyway.
Sorry to say, it’s a trick question. To actually know what to do, and what not to do, you need to understand a lot more about how CGMs work. And unless you know this, it’s very hard to manage T1D as well as you could.
Making in-the-moment decisions, such as whether to eat, to bolus insulin, or to do nothing at all, should never be based on a single glucose value. That’s because glucose is always moving around your body, in different concentrations and in different places. Learning to understand those patterns lies at the heart of self-managing T1D.
But the reality is that this process of learning to see—and react to—glucose patterns takes a while. It first requires experience with the disease itself, where you become familiar with what effects your actions have on glucose levels. Which foods will do what; the nature of exercise; how sleep and daily routines affect glucose levels. Until you develop these observations—which requires paying attention—the readings generated by your CGM can feel like watching the stock market: It’s a chart that seems to have a life of its own.
As a result, most T1Ds who wear CGMs fall into one of the following categories:
Category 1: People who are either new to T1D or haven’t taken the time to make these connections about glucose levels. They rarely look at their CGM, and leave it to their care team to analyze the data for patterns that might suggest adjusting basal dosing, or bolus dosages for meals, or other tips and tricks to improve glucose levels.
Category 2: People who’ve likely had T1D for a few years, are generally older than 25, or are engaged enough to realize that managing T1D requires at least some broad steps. They look at their CGM 3-5 times a day to see if glucose levels are particularly high or low before they decide to take insulin or eat (or leave it alone).
Category 3: People who are highly engaged and familiar with their personal glucose patterns. They look at their CGM, not just to react to current glucose levels, but to take action in anticipation of doing something (such as eating, sleeping, etc.).
Category 4: People who really aim to be healthy. They watch their CGM data every hour (or more) and make micro-adjustments (insulin or carbs) as necessary to maintain tighter control.
You can probably quickly identify which category you’re in. But you can probably also guess how each category would demand something different from their CGM. If you’re in category one or two—where you don’t really engage with the data very closely—any CGM on the market will suffice, because all of them will give you a generally reliable number for making very simple decisions.
In other words, “accuracy” is not really essential. For example, if your CGM shows either 350 or 300, and you’d do the same thing either way, then accuracy isn’t going to affect your decisions.
The same goes for the data you give your doctor at visits every 3-6 months. If they’re going to analyze that much data in a big snapshot, all CGMs perform well enough for them to make some grand, sweeping recommendations. “Let’s raise your nighttime basal rate,” for example. Or, “You may need to take 15u instead of 10u for that lunchtime burrito you always have at work.”
For this group, the “accuracy” of a CGM is entirely irrelevant, because no CGM is so inaccurate that your doc can’t help you.
For those who are more engaged and watch their CGM data closely, the product you use really matters. If you seek tight control, or use an automated insulin pump, your CGM matters a lot.
It’s here where you’d think that a more “accurate” CGM would be better. But it’s also where we get back to the topic raised earlier about the way glucose moves in the body. That is, glucose is not smoothly distributed throughout the body, like dye in water. You don’t just measure a sample of water that has dye in it and say, “It’s blue.”
Instead, glucose in the blood is more like sludge creeping down a polluted river with a lot of other muck. If you sample a cupful of that thick water, the amount of “sludge” in that sample can be weighed. But you cannot declare that the entire river has the same level of sludge. No—only that cup of sludge has that amount.
The likelihood that any single sample of sludge is representative of the total amount in the river varies depending on volatility. If the amount of sludge is low, then the variation between samples will be low, making it more likely that a single test represents the entire volume.
But as the volume of sludge increases—say, you’ve just had lunch—the distribution varies a lot. In short, when glucose levels change rapidly, up or down, OR when they are merely elevated (above 160), each individual sample becomes progressively LESS representative of the total volume.
At the end of this article, when we look at Dexcom’s data that they submitted to the FDA for approval, we’ll see exactly this phenomenon.
Since ideal T1D management relies on the ability to predict where your glucose is headed over the next 30-90 minutes, and to take actions (if necessary) to proactively make adjustments (insulin or carbs), you can’t rely on ONE reading. You have to scoop out a lot of cupfuls of sludge, measure each one, and then average them all together.
Now, how many samples you need and how long it takes to get each sample, well, that’s where the tradeoff is for product development. The more samples you test, the longer it takes, but the user can’t wait that long. And here is the most dispiriting part: There is NO sweet spot. It’s a tradeoff. Measuring glucose has inherent error built into it. You will never get better than the randomness built into the erratic nature of how glucose moves through the body, unless and until there is some technology that can non-invasively sample large sections of your body, something akin to a CT scan.
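To make this tradeoff concrete, here’s a minimal Python sketch (my own illustration, not any vendor’s algorithm) of how averaging more noisy samples tightens the estimate, at the cost of taking longer to collect them:

```python
import random

def sample_glucose(true_level=160, volatility=25):
    """One 'cupful of sludge': a single noisy glucose sample, where
    volatility is the standard deviation of sample-to-sample scatter."""
    return random.gauss(true_level, volatility)

def averaged_reading(n_samples):
    """Average n samples; the scatter shrinks roughly as 1/sqrt(n),
    but collecting more samples takes proportionally more time."""
    return sum(sample_glucose() for _ in range(n_samples)) / n_samples

random.seed(42)  # reproducible illustration
for n in (1, 4, 16):
    print(n, [round(averaged_reading(n)) for _ in range(5)])
# With n=1, repeated readings scatter widely around the true 160;
# with n=16 they cluster tightly. But no user (or pump algorithm)
# wants to wait 16x longer for each displayed number.
```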
So, this puts pressure on the FDA: How should it set guidelines to determine CGM “accuracy”? For reasons we’ll get to later, the FDA demands that EACH measurement the CGM performs must match a measurement from a separate blood glucose analyzer. The closer those two readings, the “more accurate” the CGM is purported to be. But now that you know how volatile glucose is, you can see that this kind of “accuracy” is not reliable for making clinical decisions, at least not by itself. You need a lot more data than that—namely, sets of values.
Therefore, this is exactly what some CGMs do: something akin to that “averaging” method (although it’s not really that simple). In real-world conditions, what people (and algorithms) need is data that is both reliably representative of your systemic glucose levels (within a tolerable degree of variance) and indicative of the trajectory of movement. That is, if you see an upward trend, you want to TRUST that your glucose levels are, in fact, going up.
This is called precision—when you use multiple data points rather than one—as opposed to “accuracy,” where only ONE reading has to match the value of another device. Unfortunately, the FDA doesn’t require precision because (again, we’ll cover this later) there is NO other device to compare it against. There’s no CT-scanner-like system that can actually get the data necessary to validate the CGM. The only thing we have is something like a blood glucose meter, which tests ONE drop of blood.
It also explains why “accuracy” is more of a marketing term than a meaningful measure of a CGM’s utility.
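To pin down the distinction with toy numbers (the traces, reference values, and metric names below are my own illustration, not an FDA or Dexcom definition): accuracy asks how closely each reading matches a reference measurement, while precision asks how consistently adjacent readings agree with one another.

```python
def accuracy_error(cgm, reference):
    """Mean absolute difference (mg/dL) between paired CGM and
    reference readings: the single-reading-match style of metric."""
    return sum(abs(c - r) for c, r in zip(cgm, reference)) / len(cgm)

def trend_jitter(cgm):
    """Mean absolute jump (mg/dL) between adjacent readings: a rough
    stand-in for precision, i.e., how trustworthy the trend is."""
    return sum(abs(b - a) for a, b in zip(cgm, cgm[1:])) / (len(cgm) - 1)

reference = [120, 123, 125, 127, 130]
smooth = [112, 115, 117, 119, 122]  # steady offset, clear upward trend
wobbly = [112, 131, 117, 135, 122]  # same per-reading error, no usable trend

print(accuracy_error(smooth, reference), accuracy_error(wobbly, reference))
# 8.0 8.0  -> identical "accuracy"
print(trend_jitter(smooth), trend_jitter(wobbly))
# 2.5 16.0 -> wildly different precision
```

Both traces score identically on the single-reading test, yet only the smooth one tells you which way glucose is actually headed.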
This article is less about CGMs, per se, than it is about how to use the data from CGMs to make good self-management decisions. And to do that, I compare the Dexcom G6 and G7 to illustrate these points.
To foreshadow where this is going, the chart below shows CGM data from both the G6 and G7 that I wore at the same time. The G7, in red, is purportedly “more accurate” than the G6 because it meticulously weighs each sample of sludge from the river, gives you its weight, and then moves to the next sample. The G6, by contrast, is doing a more sophisticated analysis on the signals it gets from the sensor and makes a more “precise” inference on what the body’s glucose levels are. That is, “precision” is a far more useful metric because that’s where adjacent readings are more in line with one another, so you can more easily see glucose trends.
You can see that the black circles, which represent the BGM (blood glucose meter) finger-prick readings, are often far off from either of the CGMs.
If you’re a T1D and learning to make in-the-moment management decisions—or rely on an automated insulin pump to read this data—which would you prefer? The “accurate” G7? Or the “precise” G6?
If this is all over your head, you’re not alone. But the aim of this article is to help you better understand how to read glucose charts so you can move up at least one category and get better at making in-the-moment management decisions.
This article comes in four parts. And no, this very long introduction wasn’t a “part.” It just gave you a high level overview of everything we’re going to cover. But now that you’ve read it, the parts will be easier to understand.
Part 1 covers how glucose behaves within the body. Part 2 involves ways that CGMs try to measure glucose. Part 3 is my experiment that illustrates how I use CGM data to manage T1D, comparing the G6 and G7. Part 4 gets into the weeds on Dexcom’s clinical trial that it used to get FDA approval for the G7, as this puts into perspective the real-world situations that CGM technology faces.
Let’s start with the basics.
Glucose’s erratic and seemingly random behavior
In my article, “Why Controlling Glucose is so Tricky,” I explain how the brain holds 20% of the body’s total glucose volume at any given time, and burns ~78.4 mg of glucose per minute. That’s about 1g of glucose every 12 minutes. And yet, insulin is not involved in the brain’s metabolism of glucose.
Muscles are the largest consumer of glucose in the body, and insulin is also not involved in much of that activity. Yes, insulin is involved in delivering glucose to supporting tissues around muscles, so there’s a lot of complex throughput going on.
Your glucose is shuttled here and there and everywhere, in and out of organs and muscles, at such different rates and concentrations that measuring the exact amount of glucose in your body is impossible.
You can test this yourself by using a BGM and testing fingertips, arms, toes, legs, etc. More often than not, you’ll get different readings from each location. Here, people think of the differences in these glucose levels as a delay in glucose movement—where it just takes more time to travel from one part to another. This is the “lag” effect you may have heard about between a finger-prick test and a CGM that reads from interstitial tissues.
But that’s a fallacy. The “lag effect” actually stems from how glucose levels are tested in highly controlled lab studies. Clinicians infuse glucose into people’s veins, and then test how long it takes for those glucose levels to reach interstitial tissue. During this time, the patient isn’t moving at all. The researchers don’t change the glucose levels or impose any other disturbances on the body. They just want to see how long it takes for glucose to get from blood to interstitial tissue. (While the average is 15 minutes, there’s great variability among individuals, times of day, etc.)
This is the “lag” people refer to. But these are not real-world conditions. People move around, eat, exercise (even just walking). This is where glucose volatility shows up.
To read and learn more about the erratic nature of blood glucose levels, see the article, “Differences in venous, capillary and interstitial glucose concentrations,” where the authors explain how and why there’s great variability in glucose readings using different types of measuring methods, and how that variability isn’t just a matter of diffusion time (delay).
Let’s now observe this in the same CGM chart shown earlier. Notice the BGM values (the black circles).
They are not only nowhere near the CGM levels, they aren’t even close to being “delayed” values. Sure, they were approximately where glucose levels were, but, well, “so what?” There’s no real value to be gained from this observation.
Whenever CGM levels are volatile, as in the chart above, it’s likely due to a constellation of many things going on at once: Exercise, hypoglycemia, eating certain kinds of foods, and cortisol levels are only a few of dozens of things that affect glucose levels.
This is why Dexcom doesn’t recommend attempting to calibrate a CGM while your glucose levels are not stable. For example, if your CGM levels are really flat, say, around 150 for 30m or more, but you “feel” like you’re running low, this discordance suggests your CGM may need calibration. So, you do a BGM test. If that shows 80 mg/dL, for example, then this is when you calibrate your CGM.
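As a rough sketch of that decision rule (my own illustration of the principle; the thresholds are invented and this is not Dexcom’s actual calibration logic):

```python
def ok_to_calibrate(recent_cgm, bgm_value, max_spread=10, min_gap=20):
    """Illustrative rule: only calibrate when the CGM trace has been flat
    (small spread over the recent window) AND the BGM meaningfully
    disagrees with it. Both thresholds are made up for illustration."""
    flat = max(recent_cgm) - min(recent_cgm) <= max_spread
    discordant = abs(bgm_value - recent_cgm[-1]) >= min_gap
    return flat and discordant

# Flat around 150 for 30 minutes (six readings, 5 min apart), BGM says 80:
print(ok_to_calibrate([148, 151, 150, 149, 152, 150], 80))  # True
# Volatile trace: don't calibrate, no matter what the BGM says:
print(ok_to_calibrate([120, 155, 131, 168, 140, 150], 80))  # False
```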
Remember the pop quiz at the beginning? A finger-prick blood glucose test might yield a value like 180 mg/dL, but the CGM might only show 120. Which value is “correct” is not straightforward, nor will it always be consistent. If you were to treat the 180 value as “correct” and administer 2u of insulin, that may work… or it might be overkill if the 180 was an anomaly caused by the erratic nature of glucose concentrations. Here, the CGM value of 120 might have been wiser.
As we’ll see later, you should NEVER make treatment decisions on single values. What you really need to know is the trend pattern that leads up to the current number, and you need to infer (or “predict”) where your glucose is headed.
Yes, both of those. Let’s unpack this.
CGMs and T1D management
First, by “trend”, I’m NOT talking about the directional arrow you see on CGM apps like this one from Dexcom:
The “down arrow” on this Dexcom reading indicates glucose levels are dropping. And yes, 75 is definitely low. But you can’t infer very much more than that. You cannot make an informed decision solely from this reading. Let’s expand to see the whole chart now:
Here, we see that the levels were quite high (180s), then dropped to roughly the 150s, and then suddenly took a dive. But we can also see that it’s starting to level off. What do you think the next read will be? How about 30m from now?
That’s why I ignore directional arrows—they can’t answer these questions. And worse, they can distract you from what you should be doing: Learning how to read CGM charts.
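To make “reading the chart” a little more concrete, here’s a toy sketch of the simplest possible version of that inference: fit a line through the last few readings and project it forward. (This is purely illustrative; it is not what any CGM app actually runs, and its failure below is part of the point.)

```python
def project_glucose(readings, minutes_ahead, interval=5):
    """Least-squares line through recent readings (one every `interval`
    minutes), extrapolated `minutes_ahead` past the latest reading."""
    n = len(readings)
    xs = [i * interval for i in range(n)]
    mean_x, mean_y = sum(xs) / n, sum(readings) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings))
             / sum((x - mean_x) ** 2 for x in xs))
    return readings[-1] + slope * minutes_ahead

# The drop described above: high 150s, then a dive that's leveling off.
trace = [152, 138, 118, 95, 82, 75]
print(round(project_glucose(trace, 15)))  # ~26, i.e., "severe hypo soon"
# The straight line screams emergency, but look at the last three deltas:
# -23, -13, -7. The dive is decelerating. That nuance is what a practiced
# chart-reader (or a smarter model) picks up, and what an arrow can't convey.
```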
Ok, let’s be honest. My glucose level was 75 and dropping. If you’re like most T1Ds, you’re not thinking about anything I just said. You’re looking for the nearest donut, cookies, candy, or dextrose tabs (the best choice, of course).
Now that you’ve eaten it (or rather, I have, haha), this would be a good time to ask: where do you think things are going next?
Since this is my own data, I’ll tell you that I ate four Dextrose tabs (15g), because I factor in many things: When I took insulin last, how much I took, what and when I ate last, my typical rate of insulin absorption (at least, at today’s rate, which may not always be consistent from one day to the next), and what I’m going to do next (sit and keep writing this article).
Knowing all those things (at least!) is how T1D management is done best. This is a Category 4 T1D: I pay attention to things I do and how they affect glucose values (the CGM chart).
Note: my glucose dropped because I took one too many units for lunch, as it turns out. Hey, this is the nature of T1D. Things happen. It’s a highly complex disease. But by paying attention and making small interventions, you prevent catastrophes.
Ok, enough time has passed. Let’s look at the chart again.
Hey! 108 is a good number, and the slight upward trend doesn’t bother me. Still, I’ll keep an eye on it.
Here’s the thing that made all this happen: I TRUSTED THE DATA. It was smooth, the trend line was reliable, and the readings did not wobble around. I was able to make an in-the-moment decision when it was at 75. That would not have been possible if I were NOT watching my CGM, or if the CGM’s data was “wobbly.” If it were, I would have had to wait some unknown amount of time before it stabilized, and by then, it might have been too late.
To illustrate, let’s look at that G6/G7 chart one more time and try to consider what we just learned from the incidental readings above. The G7 data is “wobbly” — can you make good decisions from that data? Look at around 3am, when my G6 showed a bounce from about 85. The G7 showed wobbly numbers that were clearly well below 60.
Anomalous spikes or dips that the G7 produces are intermittent and—by themselves—are entirely unreliable. You cannot and should not take insulin or eat just because there seems to be one or two (or more!) individual readings that move suddenly up or down. You have to wait much longer before you can infer a reliable pattern, and by then, you may have missed real, actionable opportunities to make corrections. And that’s a big deal in T1D management. Time is critical when it comes to glycemic management, because things spin out of control very fast.
Now, let’s be honest. If you’re a Category 1 or 2 person, you may not be watching your CGM like this. In fact, you may be using an automated insulin pump to do that work. But remember, those systems are doing exactly the same read-by-read analysis that I do for myself, making the same in-the-moment decisions I described earlier. No algorithm can figure out G7 data any better than you can. So, those systems won’t work if the data being observed is not reliable enough. Garbage in, garbage out.
And that’s where the risk lies with G7 data. But more importantly, that’s where the risk lies with so-called “accurate” data. You don’t want accuracy, you want precision.
This is not just conjecture—researchers have studied this phenomenon and published their results in the article, “Limits to the Evaluation of the Accuracy of Continuous Glucose Monitoring Systems by Clinical Trials,” where the authors describe the erratic and random patterns of glucose fluctuations, and call into question the appropriateness of how clinical trials for CGMs are conducted in the first place.
Read that bold text again, but this time, say it out loud, really loudly. In fact, scream it. Use a British accent to make it sound more authoritative.
I gave the example of the low glucose reading of 75, and how I made that one decision. To put this to a much longer term test, I wore a G6 and G7 at the same time for a month to see which sensor gives better data to make better clinical decisions. Read on.
Does the G7 yield greater glucose control?
Before I explain how I tested the G6 vs the G7, I need to make it clear that Dexcom’s clinical trial demonstrating the G7’s MARD level was not intended to claim that the G7 results in healthier outcomes. That’s a very different goal. The company only conducted an “efficacy trial,” which is merely intended to show that the sensor is good enough to be approved by the FDA.
What Dexcom did not do is perform an “effectiveness trial,” which is when test subjects would wear each of the two sensors and make real-time management decisions under real-world conditions. I explain this in much greater detail in my article on how to evaluate clinical trials.
Since no trials have been done like this for the G7, I tried to do it on myself. As it happens, my T1D is under very tight control, where my time in range is 95%, with <2% below range (70 mg/dL) and <4% above range (180 mg/dL).
NOTE: I do not aim for this level of tight control. I do not have “targets” in mind. I am not fanatical or obsessed about numbers. I merely follow the four basic habits of T1D management, which I describe in my article, Self-Identity and the Four Habits of Healthy T1Ds. Habit #1: Watch your CGM and get to know your patterns; Habit #2: Make small interventions with carbs or insulin “as needed.”
You can read the article to see what habits #3 and #4 are.
As the data from my experiments will show below, I was only able to achieve a TIR of 75-80% using the G7. What’s more, I also experienced considerably more hypo events and greater variability with the G7, which can be harmful.
There’s a lot of detail here, so let’s start with my experiment.
During March 2023, I wore both the G6 and G7 at the same time, but only observed data from one sensor’s app at a time to make real-time management decisions. The goal was to determine which data made it easier or better to make in-the-moment decisions. After a few days, I switched to the other sensor’s app, and repeated this pattern several times.
Upon completion of the experiment, I downloaded all my data to Excel and analyzed it to see how my TIR varied between the two. (I also collected data for insulin (from my InPen Bluetooth-enabled insulin pen), carbohydrates, exercise, sleep, and glucose levels from my Contour Next One blood glucose meter (BGM), all of which I included in my analysis report.)
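For anyone who wants to replicate the TIR arithmetic on their own data export, here’s a minimal sketch using the 70-180 mg/dL range used throughout this article (the toy readings are made up):

```python
def time_in_range(readings, low=70, high=180):
    """Percent of readings below, within, and above the target range.
    Assumes evenly spaced readings (e.g., one every 5 minutes)."""
    n = len(readings)
    below = sum(1 for g in readings if g < low)
    above = sum(1 for g in readings if g > high)
    return 100 * below / n, 100 * (n - below - above) / n, 100 * above / n

day = [65, 92, 110, 145, 210, 188, 150, 122, 98, 80]  # toy day of readings
below, tir, above = time_in_range(day)
print(f"below: {below:.0f}%  in range: {tir:.0f}%  above: {above:.0f}%")
# below: 10%  in range: 70%  above: 20%
```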
The graphic below is the topline dashboard from my month wearing both the G6 and G7:
The first thing that pops out is that the G7 reported glucose values ~5% lower than the G6 (consistent with what others have reported online). Aside from that, the two sensors appear roughly equivalent: The G6 averaged 121 mg/dL, versus the G7’s 116, and the standard deviations (SD) were 33 vs. 34, respectively.
But the real difference between the two sensors is shown by the time-in-range (TIR) stats on a day-by-day basis, as shown in the following graph:
When I used the G6 to make decisions, I achieved a TIR of >90%. When I used the G7, my TIR dropped to the ~70% range. To understand why the G7 made it harder for me to maintain glycemic control, let’s look more closely at the earlier chart, which shows a day where my decisions were governed by the G7’s data.
Now, let’s zoom into the two-hour window between 4-6pm, which is highly representative of the kind of volatility there is in G7 data versus the G6, and why it’s hard to make real-time decisions.
Remember, I couldn’t see the G6 data (the smoother blue graph). So at 5:30pm, with only the G7 data in view, I saw a very rapid rise from 88 to 155 in a matter of 30 minutes. Granted, the data leading up to that was highly erratic, but these successive readings were not. They were decisively rising, and fast. Without any idea where these levels might top out, I knew I needed to start bolusing.
As I always do, I began with small, incremental boluses, keeping a close eye on glucose levels as they rose, waiting to see when they would level off or begin to fall. The goal is to avoid taking too much, or too little. I’m aiming for the Goldilocks effect.
Turns out, the G7’s data shot up to 270. If this really was my real glucose level, the stacked boluses I’d taken would have perfectly corrected these readings, and I would have had a soft landing. But, as the insulin started to kick in, my glucose levels plummeted to 49, making it clear to me that the G7 readings were not giving me reliable information. Individual readings may have been “accurate,” but they were not representative of my actual systemic glucose levels.
To achieve tight glucose control, one must be able to look at short time windows, and act as quickly as possible to glucose movements, even if they are finely tuned adjustments. (Most people aren’t in tight control, and typically work on bigger time windows, so they won’t be as affected by these erratic readings.) The conundrum for the G7 is that sugars may look like they’re starting to move up/down, but then the data suddenly reverses 30 minutes later because those earlier readings were anomalous.
Over time, these anomalous readings will create more errors in decisions (or pump algorithms) than successes, which will impose an upper limit on how well one can actually do.
Below are more daily charts to consider (without additional commentary). You can zoom in on your own and guess how/why I was able–or unable–to see trends in time to make decisions proactively.
The G7 generally reports lower BG values
While both the G6 and G7 were tighter (SD=29 and 28), the G7’s volatility is apparent.
The G7 appears to behave better this day, but real-time decisions were based on G6 data
The day was 100% in range, but the G7’s data was all over the map. (Thanks, G6!)
The Dexcom G7 trial: Exploring the futility of “accuracy.”
In Dexcom’s published report, “Accuracy and Safety of Dexcom G7 Continuous Glucose Monitoring in Adults with Diabetes,” 318 diabetic subjects wore three G7 sensors simultaneously over the course of ten days. For three of these days, subjects underwent clinically induced hyperglycemia and hypoglycemia under controlled conditions, where blood samples were taken and measured using a reference blood glucose sensor, the YSI 2300 Stat Plus glucose analyzer. The analysis showed that the “mean absolute relative difference” (MARD) between the two was ~8.8% for the G7, versus ~10% for the G6. The lower the percentage, the smaller the difference to the reference analyzer. Hence, greater accuracy.
Let me remind the reader that the G7 trial had subjects wear THREE G7s simultaneously during the testing period. When blood samples were taken and measured on the YSI analyzer, each value was compared to the G7 readings—but if the person was wearing three sensors, which of the three G7s was it measured against? Were all three averaged together? Was it only one? Which one? Or did Dexcom choose whichever of the three happened to be closest to the YSI? The company doesn’t reveal this in the trial data, and that alone raises eyebrows for me.
For this reason alone (though it’s not the only one), claims about MARD should not be taken at face value.
Moreover, MARD isn’t a single number that captures a whole sensor’s performance. MARD values in Dexcom’s data varied widely, especially under different conditions, such as glucose levels and rates of change, as shown in this figure from their report.
The mean and median per-sensor MARDs were 8.8% and 7.8%, respectively; 442 sensors (71.4%) had MARD values <10%, and 12 (1.9%) had MARD values >20%.
According to Dexcom’s data, the G7’s MARD value was best when glucose values were in the sweet spot of glycemic ranges, but diminished as glucose levels edged higher. This bar graph suggests the best MARD happened most often at ideal glucose ranges, but most T1Ds only spend about 30% of their day in those ranges. The rest of their day is spent far outside, usually well above 180 mg/dL, where the G7’s MARD rating is well above 14%.
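To see how a single headline number can hide that band-specific error, here’s a small sketch with made-up (cgm, reference) pairs; these are illustrative numbers, not Dexcom’s data:

```python
def mard(pairs):
    """Mean absolute relative difference (%) between paired
    (cgm, reference) readings."""
    return 100 * sum(abs(c - r) / r for c, r in pairs) / len(pairs)

# (cgm, reference) pairs, invented for illustration:
in_range = [(98, 100), (155, 150), (120, 126), (170, 176)]    # 70-180 band
elevated = [(205, 240), (260, 225), (300, 255), (210, 250)]   # >180 band

print(round(mard(in_range), 1))             # 3.4  -> looks excellent
print(round(mard(elevated), 1))             # 15.9 -> same sensor, high band
print(round(mard(in_range + elevated), 1))  # 9.7  -> the blended headline
```

A blended single-digit MARD can coexist with mid-teens error in exactly the range where T1Ds spend most of their day.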
What is also not revealed in Dexcom’s report is the rate of change (ROC), which can also greatly affect MARD. Once again, if you visualize glucose being highly volatile in fluid, imagine how much greater that volatility is when glucose is rushing in or out of that fluid rapidly. It’s like injecting dye into a vat of water: You’re going to see a lot of dense color in some places more than others before the dye diffuses and settles out into a consistent hue.
In the case of glucose levels rising or falling, Dexcom limited its testing to only 1 mg/dL change per minute, which showed some of the worst performing MARD values. In the real world, once a T1D eats a meal, glycemic levels can change at 2-4 mg/dL per minute as a matter of course. Relying on CGMs to capture that data is prone to significant error bars. (The G6’s algorithm is far superior in this regard for smoothing out these errors and giving the user or algorithm more reliable data to work with.)
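The arithmetic behind ROC is worth seeing (a sketch of my own, with invented post-meal readings): it’s just the difference between consecutive readings divided by the sampling interval, and a routine meal easily blows past the 1 mg/dL per minute ceiling tested in the trial.

```python
def rates_of_change(readings, interval=5):
    """mg/dL per minute between consecutive readings taken
    `interval` minutes apart."""
    return [(b - a) / interval for a, b in zip(readings, readings[1:])]

# A typical post-meal climb, sampled every 5 minutes (invented data):
post_meal = [110, 122, 138, 155, 170, 182]
print(rates_of_change(post_meal))  # [2.4, 3.2, 3.4, 3.0, 2.4]
# Every interval here exceeds the 1 mg/dL/min rate Dexcom tested at,
# which is precisely where MARD performed worst.
```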
To see to what degree this variability in MARD plays into real-world conditions, we can look at this meta-analysis of multiple studies on overall glucose levels for T1Ds who wear CGMs. It shows that only 30% of T1Ds have glucose levels between 70-180 mg/dL 70% of the time, which is where the G7 is most accurate. By contrast, 80% of T1Ds spend more than 70% of their time above 180 mg/dL, where the G7’s error exceeds 30%. (For context, 44.5% have an A1c between 7–9%, 32.5% exceed 9%, and only 23% of T1Ds have an A1c <7%.)
Despite the fact that the G7 is most accurate at glucose levels between 70-180, T1Ds spend far more time above 180. Hence, T1Ds experience accuracy error rates of >30% most of the time. This means that the decisions humans or algorithms make about whether to dose insulin or carbs rely on highly imperfect information (especially compared to the G6, which was more reliable).
Summary
I personally suspect that few T1Ds will find that the G7 improves their glycemic control. This will also be a problem for automated insulin pumps, for the same reasons.
Nevertheless, I suspect Dexcom is primarily focused on the value of the improved MARD rating in their marketing plans. It’s invaluable to claim that your MARD is superior to all other CGMs, regardless of the dubious value of MARD for CGMs.
It also helps that Dexcom’s target market is moving well beyond T1Ds. There are nearly 40 million type 2 diabetics (with another 98 million presumed to be undiagnosed T2Ds), compared to roughly 2 million T1Ds. Add to that a rapidly emerging market of non-diabetic “life-hackers,” such as athletes, health enthusiasts, and everyday consumers. In fact, Dexcom has released a non-prescription version of the G7 called Stelo, and these buyers don’t care much about volatility.
Of course, the downside for T1Ds is that some could actually see worse outcomes, and not even know it. The G7’s propensity to report lower glucose averages (than what is actually in the bloodstream) may give people the false impression that their glycemic control has improved with the G7.
I hope the G6 never goes away. But a better idea would be to put the G6’s algorithms on the G7’s sensor. Here’s a marketing plan: Sell the G7 with the G6 algorithm as a less expensive over-the-counter product for the comparatively small number of T1Ds who actually need higher-quality data to manage glucose levels. We’re already paying too much for all the other stuff we have to buy, and we’re a tiny market compared to the rest of the world. This way, everyone’s a winner!