AI and T1D: From OMG! to Oy.
Artificial intelligence offers both amazement and danger for T1Ds
Whenever people ask me whether I worry about artificial intelligence, I cite Homer Simpson’s philosophy about beer: “It’s the cause of, and solution to, all of life’s problems.”
This duality is actually more apropos than you may think. Unless you yourself are an artificial intelligence, chances are that you are either excited or terrified by the new world order being shaped by AI. Perhaps both.
Fine, but let’s get to what we really want to know: How will AI affect type 1 diabetes?
My quick and snarky answer is that the best artificial thing T1Ds can hope for is a pancreas. Ba da boom.
But seriously, folks… AI and T1D: Yeah, there’s a lot there. It takes me back to 1985, when I got my degree in computer science with a focus on AI. Back then, it was all the rage, not too different from today, actually. A year earlier, Steve Jobs had shown the very first Apple Macintosh on stage, where he turned on the computer and it actually said, “Hello.” The audience was stunned and amazed, much the way the world reacted to ChatGPT (running GPT-3.5) just over a year ago.
It felt so alive.
In 1985, we heard talk of neural networks and other jargon that we hear today, and imaginations ran along similar lines: hopes, fears, opportunities and risks. It may be hard to believe, but back then, everyone was more than eager to share everything about themselves—their personal data, including health information—in the hopes of having more personalized experiences, from advertising (yes, personalized ads!) to news (disinformation? Never heard of it) and, of course, healthcare. Accordingly, new startups were getting huge investments from venture capital, and all of us newly minted CS majors were getting high-paying jobs.
Alas, it never came to be, mostly because there was insufficient infrastructure: computing power, physical resources, and “electronic data” were all either non-existent or in their infancy. “The internet” as we know it today didn’t really begin to take shape until the mid-1990s, ten years away.
At the time, AI was doomed to failure, but this came as no surprise to our advisors and college professors, who’d been through it all before. They told us about the old days—the 1950s and 60s—when the same hopes and concerns about AI were voiced as similar “advances” reached the public consciousness.
Consider this segment from a 1960s TV show, in which a father tries to explain to his son how the technical requirements for an artificial intelligence would be too great to actually build, requiring equipment the size of Jupiter. He also waxes philosophical, saying that artificial intelligence presents “a philosophical and semantic paradox. And hence, permanently an impossibility.” (If you know what TV show that was, let me know!)
They told us, “The more things change, the more they stay the same.”
Today’s AI is obviously much more advanced, but it’s also more tangible. It can do things that were only speculative in 1985. But rather than wax philosophical, let’s get back to how AI can affect T1Ds, and whether it will, once again, all blow up like it did in the 80s.
For T1Ds, there are two realms of AI to consider: The algorithmic “intelligence” that drives automated insulin delivery systems, and the “large language models” (LLMs) that drive information.
Let’s begin with the easier one: Automation.
AI and insulin delivery automation
To set the stage, today’s AID systems (also called closed-loop) are pretty remarkable in what they can achieve. Medical literature is rife with success stories about how people with A1cs as high as 13% are using these fully automated systems to get down to the 8% range, and sometimes lower, without requiring the user to take any action or even pay attention. These people can just let the system work automatically.
Ideal users are children, many adolescents, the elderly, or those who are either unable—or unwilling—to self-manage their glucose. Let’s be honest, this is a huge percentage of the T1D population.
But it’s not the majority, which is a key detail here. Also, there’s the subtle clue that AID performance is best for the groups listed above, which is a long-winded way of saying, “anyone who can’t achieve A1c levels below 8%.”
What’s magic about 8% is that, in fully automated mode, this is about as good as AID systems can perform entirely on their own. To do better, users have to provide some additional help, such as announcing meals (carb counts), exercise, or anything else that can hint to the algorithm how to improve insulin dosing. Users willing to do this can sometimes achieve levels near 7%.
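For readers who like to see the arithmetic, here’s a minimal sketch of the kind of calculation a bolus calculator performs when a meal is announced. The parameter values (carb ratio, correction factor, target) are purely illustrative—they are not the settings or the algorithm of any particular AID system:

```python
# A minimal sketch of the arithmetic behind a "meal announcement."
# All parameter values are illustrative only, NOT any real system's defaults.

def meal_bolus_units(carbs_g, glucose_mgdl, iob_units,
                     carb_ratio=10.0,         # grams of carbs covered by 1 unit (illustrative)
                     correction_factor=50.0,  # mg/dL dropped per unit (illustrative)
                     target_mgdl=110.0):
    """Estimate a meal bolus: carb coverage + correction, minus insulin on board."""
    carb_dose = carbs_g / carb_ratio
    correction_dose = max(glucose_mgdl - target_mgdl, 0.0) / correction_factor
    dose = carb_dose + correction_dose - iob_units
    return max(dose, 0.0)  # never recommend a negative dose

# Example: 45 g of carbs, glucose at 160 mg/dL, 0.5 units already on board
print(round(meal_bolus_units(45, 160, 0.5), 1))  # ~5.0 units with these illustrative settings
```

The arithmetic itself is simple; the value of the announcement is that it tells the algorithm about a disturbance (the meal) it could not otherwise see until glucose had already started rising.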
As studies seem to show, the more engaged the user is willing to be, the better the performance. But therein lies the gray area between where the AID algorithm is helping and where the user is really the one in control. And that’s a good thing, actually. Study after study shows that the more engaged users are in self-management, the healthier they are—not just in A1c levels, but in time-in-range (TIR) and other metabolic measures (cardio fitness, lipid levels, etc.).
But here’s where we’re moving away from “automation.” The dream of AI is that the automation will just keep getting better, leading to even lower A1c levels while minimizing the risk of hypoglycemia, and still relieving the burden of self-management for a larger proportion of T1Ds.
While that would be nice, the medical literature points out three principal challenges suggesting that current AID algorithms are about as good as they can possibly get, and that hoping otherwise creates a moral hazard: using the current AID systems, or waiting for them to get better, keeps people from engaging in the self-management practices that actually yield much healthier outcomes.
Here are the three principal factors that keep AID algorithms from improving beyond their current state of technology.
First, the algorithms estimate insulin dosing based on glucose levels and patterns, but the body regulates glucose with more than just insulin—other hormones and glycemic control mechanisms are involved—so the amount of insulin a healthy pancreas secretes depends on far more than glucose levels alone. We don’t have the technology to detect these other hormones, and likely never will: there are just too many of them, and each one’s mechanism of action would have to be understood well enough to build into an algorithm. These other hormones are not just hard to detect, they are highly volatile, introducing stochasticity—a tendency toward randomness—into the holistic system.
By using only glucose levels to determine insulin dosing—even in semi-automated mode, where users input carbs or other events—the randomness of the human body’s metabolic functions is such that the algorithms are too error-prone to do better than their current level of performance. (For more on the erratic nature of glycemic control, see my article, “Why Controlling Glucose is so Tricky.”)
Second, metabolic stochasticity notwithstanding, glucose values themselves are highly subject to randomness due to their kinetic behavior in fluids. Because CGMs read glucose in interstitial tissue rather than blood, there’s an even bigger gap between what the CGM reports and one’s systemic glucose level within the body. This means there’s even more error in glucose-level precision, creating still more stochasticity that the algorithm cannot compensate for. (Further details can be seen in my article, “Continuous Glucose Monitors: Does Better Accuracy Mean Better Glycemic Control?”)
Third, insulin is delivered through interstitial tissue, which introduces even further randomness due to absorption variability. As all T1Ds have experienced, there’s a lot of variability in the time it takes for any given injection (or infusion) of insulin to take effect. Whether that variability is 15 minutes or 90, the algorithm has no way to know how much insulin is actually on board at any given time. Sure, it can estimate, but again, stochasticity increases error rates. (More on these details is in my article, “Conditions Where Insulin Pumps May Not Deliver Intended Doses.”)
Taken together, these three principal factors impose an upper limit on how effective AID algorithms can be in controlling glucose levels, and most medical literature points to studies showing that the current level of control that algorithms provide is as good as it can get (barring any new invention, such as direct-to-blood CGM and/or insulin delivery).
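To make that ceiling concrete, here’s a toy simulation—emphatically not a physiological model, with every number invented for illustration—of a simple dose-adjusting rule steering glucose toward a target. One run gets perfect information and perfect delivery; the other gets a noisy, lagged CGM reading and variable insulin absorption, the three factors above in miniature:

```python
# Toy sketch (NOT a physiological model): a proportional dosing rule nudges
# glucose toward 110 mg/dL. We compare perfect sensing/delivery against a run
# with CGM noise, interstitial lag, and absorption variability.
# All numbers are invented purely for illustration.
import random
import statistics

def simulate(cgm_noise_sd, reading_lag, dose_jitter, steps=2000, seed=1):
    rng = random.Random(seed)
    glucose = 140.0
    past = [glucose] * (reading_lag + 1)   # buffer of recent true values
    history = []
    for _ in range(steps):
        # The controller sees an old, noisy value (interstitial lag + CGM error)
        reading = past[0] + rng.gauss(0, cgm_noise_sd)
        # Proportional correction toward the 110 mg/dL target
        intended = 0.15 * (reading - 110.0)
        # Absorption variability: delivered effect differs from intended effect
        delivered = intended * rng.uniform(1 - dose_jitter, 1 + dose_jitter)
        # Unannounced disturbances: meals, hormones, stress, activity...
        glucose = glucose - delivered + rng.gauss(0, 8.0)
        past = past[1:] + [glucose]
        history.append(glucose)
    return statistics.stdev(history)

print("glucose spread, perfect sensing and delivery:", round(simulate(0.0, 0, 0.0), 1))
print("glucose spread, with lag, noise, and jitter :", round(simulate(15.0, 5, 0.5), 1))
```

In runs like this, the degraded case typically comes out with a wider glucose spread. The rule isn’t any dumber in the second run; it simply has noisier inputs and noisier outputs to work with, which is the same bind a real AID algorithm is in.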
Everything discussed above is provided in greater detail, including citations from medical literature, in two articles: The first is “Benefits and Risks of Insulin Pumps and Closed-Loop Delivery Systems,” and the second is, “Challenges Facing Automated Insulin Delivery Systems.”
Since AI isn’t really going to contribute much more on the technology side, we can instead look to what AI can do for education: making complex topics easier to understand and more widely disseminated so people can apply more personalized self-management techniques to their everyday lives.
That’s easier said than done, and it’s a very winding road to get there, so let’s get started.
AI and the Information Realm
Where AI can really affect T1Ds is not in technology or devices, but in what people think and believe about T1D.
Yes, AI can distill highly complex information into a format that less technical people can understand, but it can only be helpful if the source information is credible. If the information is not good, well, that’s an even harder problem to undo: bad information permeates the system, and it takes far more good information to undo what AI has already come to “believe” from the old.
For context, let me back up a moment with something that may seem entirely unrelated, but you’ll see how it ties together.
Am I starting a podcast? Well, it’s not that simple.
I’ve recently gotten a lot of emails from people asking me if I’m starting a podcast, because I’ve begun attaching “podcasts” to my articles. In each one, two hosts summarize the article it’s attached to.
Here’s the first minute or so from a 13-minute podcast that covers my article, “Benefits and Risks of Insulin Pumps and Closed-Loop Delivery Systems.”
Another example is this 45-second intro from a 9-minute podcast covering my article, “Who’s the Grand Wizard of T1D Knowledge?” that discusses how to understand medical research. Give it a listen:
One more example here—it’s a 35-second clip from a 7-minute podcast reviewing my article, “T1D and Health: How Long Will You Live?”
Do these sound compelling? Do you think they’d be better than actually reading the articles? Some have told me they prefer them—they can be heard while driving, exercising, or doing anything else. Podcasts are the media of choice for an increasing number of people. Furthermore, some studies show that people actually learn more by listening than by reading.
For the record, I’m not starting a podcast. But I do find that these podcast-style explainers for my articles are the next best thing.
Here’s the shocker: This isn’t a podcast, and the hosts aren’t even real people. They’re synthetic voices from a new AI application from Google called NotebookLM.
Impressive, right? AI is making big leaps that continue to amaze us. The best part is how easy it is.
Technical wizardry and parlor-room tricks aside, what impressed me more was NotebookLM’s understanding of the material I provided: it knew what points to emphasize, how to shape the story arc, which analogies and idiomatic expressions to use (many of which happen to be very particular to T1Ds), and how the voices should rise or fall to evoke emotion (surprise, amazement, and many others).
To make those decisions, NotebookLM has to understand the material at a level beyond just the facts—a level that transcends the content itself. Combine the two, and the podcast sounds persuasive.
You can see where this is going. PERSUASION is the primary currency in the attention economy. While this can be great, there are also perils: What if I fed it false or misleading information? Because AI has no ability to fact-check—that’s another domain entirely—anyone could make a podcast that persuades a large number of people of false information. You know, just like TV.
The more things change, the more they stay the same.
So, how do we know that what my articles say—and what the podcast hosts say about them—is credible? Because I have a pretty extensive background in reviewing medical research through my role as the CEO of a medical diagnostics company. For more on how I review medical journals, see my article, “Who’s the Grand Wizard of T1D Knowledge?” For the best primer on understanding clinical trials, see Dr. Peter Attia’s article and podcast episode, “Good vs. bad science: how to read and understand scientific studies.”
Combining my background with my fifty years of being a T1D, I can delve into nuanced medical journals and discern validity. Now, to be clear, I am far from a top researcher who could review any kind of research, study, trial or medical claim. I know my limitations, so I focus my attention only on the most widely and commonly published topics in the diabetes space, and typically on comprehensive literature reviews conducted by larger teams of experts who’ve compiled hundreds or thousands of prior studies to validate their findings. In other words, I’m pretty conservative in what I look for.
When you feed AI known, quality content, it can sift through the details to build a new narrative, or put things together in ways that go beyond the science, communicating information in ways that may have a bigger impact than the original raw material alone.
That’s what I found to be the best advantage of NotebookLM and the podcasts it made from my articles: I couldn’t have done that myself. And that’s a hint as to where the future of AI can be very beneficial in medicine, and T1D in particular. As long as AI is being given good, quality information to digest, it will produce fantastic output.
That’s my hope for the future of T1D, but it’s a long road to get there.
Today, most T1Ds are getting their primary information about their disease from sources other than clinicians—or even in spite of clinicians. They’re getting it online, on social media, and in chat groups, and those corners of the internet are not exactly full of well-informed people in the medical field. So, whatever is discussed there is, shall we say, a bit off the mark.
Here’s the problem: AI crawlers pick up on what’s discussed online, but as cited earlier, they don’t know what’s true—AI is nothing more than a statistics engine. AI only knows what’s stated more often.
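Here’s a grossly simplified caricature of what “knows what’s stated more often” means. Real LLMs are vastly more sophisticated than this, and the example posts below are invented, but frequency in the training data still shapes what gets reproduced:

```python
# A caricature of a "statistics engine": count competing claims in a scraped
# corpus and weight whichever appears most often. The posts are invented.
from collections import Counter

scraped_posts = [
    "closed-loop pumps eliminate the mental burden entirely",
    "closed-loop pumps eliminate the mental burden entirely",
    "closed-loop pumps eliminate the mental burden entirely",
    "closed-loop pumps help, but engaged self-management still matters",
]

most_repeated, count = Counter(scraped_posts).most_common(1)[0]
print(f"Repeated {count} times, so weighted most heavily: {most_repeated!r}")
```

Nothing in that tally measures whether the repeated claim is accurate, well-sourced, or complete—only that it shows up more often.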
People may mention clinical trials that show how a particular closed-loop insulin pump performs well, but neither the people nor the AI knows how to evaluate those assertions, to discern which clinical trials are done well, and which aren’t. Does AI know that study subjects may or may not be representative of T1Ds in general? Will it understand that some trials are designed to favor particular outcomes?
Most people don’t know this, either! And yet, we expect AI to be smarter than us. And that’s where Homer Simpson’s quote applies: Without our realizing it, AI is becoming the cause of these problems.
The Economics of AI
Oh, speaking of Homer Simpson, he works for Springfield Nuclear Power, which is related to this topic of AI. Fun fact: Microsoft just signed a deal with Constellation Energy to restart the Three Mile Island nuclear power plant because AI data centers require so much energy that nuclear power may be the way to generate all those fake political ads spreading across the internet.
And this is just the beginning. All the major AI companies are in talks with nuclear power operators for this very purpose. By the time this article is published, there will likely be ten more deals inked.
According to a July 2024 article in the NYTimes, the amount of electricity needed to power AI data centers was estimated to reach 8% to 10% of all power generated in the United States by 2030. And yet, a mere four months later, new estimates have that figure reaching 17% of total energy.
Given the high expense associated with generating this much energy, there must be some economic upside. Indeed, the critical question is what that business model is.
It all comes down to this: AI needs information to generate information. The future of AI—or of any technology—relies on the product being sold for more than it costs to build. If AI is going to require billion-dollar nuclear power plants, not to mention billions more in computer servers installed in billion-dollar data centers, surely it’s going to generate information that people will pay large sums of money for, right? I mean, how much will these companies have to charge for deep-fake videos?
AI needs information as input to generate its output, which is also information. It’s a weird, mind-bending exercise in recursion. Let that sink in for a moment. You may need to chew on a gummy. (Zero-sugar, remember.)
The Economics of Information
It’s fair to say that AI’s future—or, at least, the products that use it—is based on the economics of information.
All computer programmers are familiar with the saying, “garbage in, garbage out.” So, AI has to be mindful (heh heh) of the information it’s taking in. Or, at least, the people who buy or use information generated by AI have to be.
Stated in the reverse, AI can only monetize the content it produces if it’s useful to those willing to pay for it, so there’s an incentive to source quality information if the buyers need quality information on the back end.
And quality information isn’t cheap. The more expensive it is to produce, the more incentives the owners or publishers of that information have to monetize it, whether through subscriptions, advertising, or direct sales (such as data analytics services, etc.).
Indeed, most AI today has crawled the web, getting all its information for free, which angers many of the companies that produce it—they’re the ones who employ teams of experts, editors and curators to generate quality content in formats their paying customers can use. The New York Times is suing OpenAI for copyright infringement, as is the Wall Street Journal and other publishers. Similarly, other AI companies have either been sued or been sent cease-and-desist letters by these same publishers for crawling their websites for content.
As a result, websites with good, credible information, which is expensive to produce, are increasingly behind paywalls, making them inaccessible to the AI bots that crawl the web to “learn.”
As higher-quality information becomes less accessible and more expensive, AI will rely on (and display) less accurate info, yielding a progressively less beneficial ecosystem—devaluing its own potential and, even worse, propagating bad information to the general public.
When it comes to health-related topics, the NYTimes article “Google Is Using A.I. to Answer Your Health Questions. Should You Trust It?” says that the problem with AI is that “its answers are shaped by websites that aren’t grounded in science, so it might offer advice that goes against medical guidance or poses a risk to a person’s health.”
The bottom of the barrel of information quality is social media: Facebook, Reddit, Instagram, TikTok, YouTube, and so on. None of these companies invest in making their information “good”—not that they ever claimed otherwise. They’re social by nature; their economic model is not to sell information, but to sell ad space. And advertisers love platforms with lots of eyes. In the advertising economy, neither publishers nor advertisers care about truth, accuracy, or quality at all. In fact, false information—especially the kind that riles one’s emotions, to put it diplomatically—attracts more engagement (“eyes”), yielding higher ad revenue.
Unlike most social media platforms, Reddit openly invites AI crawlers to scan its content (for a modest fee, which Reddit desperately needs), which helps AI learn new expressions of speech and youthful terminology.
But it also feeds the misinformation universe.
In Reddit’s defense, it has put a lot of pressure on moderators (volunteers from the general population) to try to filter misinformation, but this is mostly in the realm of politics and high-profile topics like vaccines and the Holocaust.
But when it comes to nuanced medical information, this is a much tougher challenge. Volunteer moderators who have no medical expertise are ill-equipped to attempt to filter misinformation, especially in highly complex topics like diabetes. This should come as no surprise—even the editors of highly respected medical journals have a problem sifting through articles submitted by scientists, doctors, and other experts in their respective fields.
For context, a two-part podcast from Freakonomics, “Why Is There So Much Fraud in Academia?”, cited reports showing that over 10,000 research papers were retracted in 2023 from high-quality medical journals—because medical science and research is really hard and expensive to validate. That’s right, publishers also have to hire quality researchers to review the quality research submitted to them. It’s an ugly business.
And yet, the poor social media moderators who are tasked with trying to filter misinformation from their discussion groups have neither the time nor the expertise to do it properly, leaving the members of the group subject to… “groupthink.”
The explosion of groupthink
It’s a fact of human nature that when like-minded people gather, they tend to believe only the information within their circle, and those beliefs become more extreme—a phenomenon called the law of group polarization. We’ve all seen this in the media environment, leading to modern-day political polarization: People hang out in their own information silos, where they are shielded from ideas—including facts—that don’t comport with their pre-existing views. Members and moderators within those silos have little incentive (or experience, or expertise) to vet information from inside the silo, let alone information from outside it.
All social media sites suffer from this for obvious reasons. When non-expert volunteers from the general public are both participants in the discussion and its “moderators,” the result is a homogenization of information. I.e., groupthink.
Several of my own newsletter subscribers who happen to be T1D practitioners have told me that the amount of misinformation on T1D social media discussion groups is rampant. Some have tried to correct it in their own posts, but community members and even moderators sometimes vociferously object if those posts don’t comport with the community’s viewpoints.
Not only is the echo chamber feeding on itself, but when AI crawls this content, it adds to the statistical weighting that the large language models put on content that’s repeated more often.
I should make note of one great benefit of social networks that has nothing to do with facts: They are fantastic forums for emotional support. The online communities of T1Ds are healthy and receptive to one another, and the value of such support should not be underestimated.
When I was a teen, the internet didn’t exist, so I had to complain about my maddening glucose swings to friends, relatives, and even my high school teachers. One incident with a teacher was going fine until she finally interrupted me to say, “Dan, this is pre-calc. I think I can speak for the whole class when I say you’re going a tad off topic.”
Sigh, there I go again: off topic. The more things change… right?
The point is, social media heightens groupthink, where people—and AI along with them—have drunk the Kool-Aid of their own ecosystems, believing only what is said within their walls and giving less weight to information from outside. When quality information sits behind paywalls, it becomes increasingly difficult to reach an insulated group with information that is not only true but also runs against the groupthink.
But AI doesn’t know this. It’s just a statistics machine, repeating whatever is said more often.
AI is groupthink gone awry.
A fantastic visual illustration of this (yes, visual!) can be found in the NYTimes article, “When A.I.’s Output Is a Threat to A.I. Itself.” As AI ingests information from the internet and spews out its own results, those results get ingested in turn, homogenizing the information: the output looks increasingly identical to itself, ultimately culminating in noise.
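If you want the mechanism in miniature, here’s a toy sketch—purely illustrative, with made-up “perspectives” standing in for real content—of what happens when each generation of output is built only from samples of the previous generation:

```python
# Toy illustration of output feeding on output: each "generation" is built only
# from samples of the previous one, so anything not sampled is lost forever and
# the pool grows more homogeneous. The "perspectives" are placeholders, not data.
import random

rng = random.Random(0)
pool = [f"perspective_{i}" for i in range(100)]   # generation 0: varied, human-made content

for generation in range(1, 16):
    pool = [rng.choice(pool) for _ in range(100)]  # retrain only on sampled output
    print(f"generation {generation:2d}: {len(set(pool)):3d} distinct perspectives remain")
```

The count of distinct perspectives can only fall or hold steady from one generation to the next—it never recovers—which is the homogenization the article’s visuals depict.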
It gets even worse: Bad actors can manipulate what AI engines actually “think,” progressively making the problem almost insurmountable, as explained by tech columnist Kevin Roose in his piece, “How Do You Change a Chatbot’s Mind?” The article discusses both the recycling of bad information and the difficulty of undoing it—and how both processes can be hijacked by bad actors.
Roose worked within the network of well-connected AI folks to figure out ways to manipulate the system, and discovered secret codes (known as “strategic text sequences”) that look like gibberish to humans but, when posted on websites, are recognized by AI models and feed new information to them.
These techniques are not unlike how people used to game Google’s “PageRank” algorithm to artificially elevate websites in search results, a practice euphemistically called search engine optimization, or SEO. Roose explains how this sort of manipulation could become its own cottage industry, much as SEO did in the 1990s and 2000s. Of course, Roose also points out that this is a cat-and-mouse game—as each side gets smarter, so does the other.
The more things change, the more they stay the same.
AI and T1D: It’s more about malinformation than misinformation
Obviously, everything mentioned above applies to AI in general, so what makes T1D different in this regard?
In a word: Scale.
Wait, let me emphasize that to convey my true feelings:
SCALE.
By the numbers, there are two million T1Ds in America, whereas there are 30 million T2Ds, with an additional 98 million undiagnosed. Add another 100 million who are prediabetic—or, as I call them, aspiring diabetics—and you can see how small the T1D universe is. When your world is that small, a little bit of misinformation can have huge effects.
It’s like throwing rocks in a pond: the larger the pond, the smaller the effect any given statement has on the ecosystem. Consider the old cliche about whether cinnamon actually helps reduce glucose levels: There may be those who actually believe it, but most diabetics know better because the proportionally larger ecosystem of T2D groups and institutions corrects for it.
But T1D is more like a puddle inside a pothole on a busy street somewhere in the midwest, where there’s not enough money to pave the roads. Misinformation in the T1D world is like a car driving over that pothole, spraying water everywhere. As winter drones on, the yucky, polluted snow of misinformation just keeps falling inside, refilling the puddle, and bad drivers just keep pounding their tires on it. The lack of resources to clean it up and patch the pothole means that the problem persists, and more people get sprayed.
FYI: I’m from Ohio, I get to say all that.
So, when T1Ds say that closed-loop insulin pumps will one day relieve the mental burden of self-management for everyone, everyone hears it and believes it, and it’s amplified to eleven within the echo chambers of social media and podcasts. Since the ecosystem is so small, the groupthink is so intensified that there’s no opportunity, let alone incentive, to argue with it. Few want to (or even can) read the medical literature that suggests this utopia is not realistic. Or, at least, “not that simple.”
Indeed, “not that simple” is not the same as “not true.” It’s saying there’s nuance. As stated before, there are many who benefit from AID systems and have experienced relief from the mental burden, especially children and teens. I know a great many people who swear by them and do really, really well!
So, the claims aren’t “misinformation,” per se; the missing nuance makes them incomplete information. The reality about AID systems is muddier and more complex, according to well-conducted trials published in dense medical journals that few people have access to. But who’s going to get into the nuances of that? (Well, I do: here ya go.)
The point is, because there are kernels of truth to many such assertions—whether about pumps, diets, nutrition, insulin types, or other things—I prefer to call this type of thing malinformation. The elements of truth in such assertions (again, across more topics than just AID systems) are what keep these semi-factoids circulating in the groupthink ecosphere, but their aggregate and continued repetition is more harmful than helpful.
The medical system itself has its own problems in this regard. Marty Makary, a Johns Hopkins surgeon, talks about this in his latest book, Blind Spots: When Medicine Gets It Wrong, and What It Means for Our Health—essential reading for doctors, especially endocrinologists, because we T1Ds rely on you to be smarter than the average bear.
Makary’s book addresses the problems of groupthink and cognitive dissonance in the medical community and delves into several of the “blind spots,” where he cites examples such as treatments for appendicitis, the peanut allergy epidemic, low-fat diets, low-carb diets, diets in general, misunderstandings about hormone replacement therapy and breast cancer, antibiotic use, and the evolution of childbirth.
He explains the urgent need for reform in medical education and the major barriers standing in the way of innovative medical research. (You can also listen to an interview with him talking about his book and delving into even more detail here.)
The intersection of malinformation and AI is groupthink: when the medical community gets things wrong, it’s very hard for course corrections to filter down through the information ecosystem—to clinicians and other medical practitioners, and ultimately to patients, who themselves are the moderators of social media discussion groups.
So, how can AI save us from that? Interestingly, as Homer Simpson would say, AI is both the cause and the solution. Yes, AI also provides a great opportunity to address the groupthink problem. So, put on your happy face, cuz the news gets better from here.
Good uses of AI in medicine
There are ways that AI can bring great benefit to medicine, and T1D by extension. One of the leading voices in this area is Isaac Kohane, M.D., Ph.D., a physician-scientist and chair of the Department of Biomedical Informatics at Harvard Medical School. He’s authored numerous papers and influential books on artificial intelligence, including The AI Revolution in Medicine: GPT-4 and Beyond.
In July 2024, Kohane spoke with Dr. Peter Attia on his podcast The Drive in an episode titled, “AI in medicine: its potential to revolutionize disease prediction, diagnosis, and outcomes, causes for concern in medicine and beyond, and more.” The interview is quite long and too detailed to review here (so listen to it when you have time), but their shared belief, in summary, is that AI is not likely to take over anyone’s job in medicine, and that it can add even greater benefits, such as automating administrative work and addressing physician shortages, among many other things.
And perhaps the greatest benefit is that AI can help make clinicians, researchers, scientists, and care providers better at what they do, more informed about their expertise, and better trained at administering their services.
The prerequisite is that the roots of the “quality information ecosystem” need to be made available to AI through the editorial process of scientists, researchers, epidemiologists, and academics. When they generate the good, quality information that makes its way into medical journals, AI content generation—much like the podcasts that AI makes from my articles—can become a useful teaching tool.
Imagine working with a dietitian who is far better informed about insulin-to-carb ratios than the legacy “low-carb” advice that is sometimes still given, because that clinician (who has access to credible research behind paywalls) can use AI to distill those details into formats that are more easily conveyed to patients.
Similarly, endocrinologists could provide interactive tools and apps that personalize T1D management, because AI can ingest a great deal more information in less time than the doc can in a short, 15-minute appointment.
You might even get a much better analysis of your latest blood tests—one that detects anomalies in certain panels that might slip past your primary care provider. The list goes on and on.
T1Ds stand to benefit when more experts are better informed and can administer their services in ways that transcend the traditional model, leveraging new online tools that AI is most adept at fostering.
My experiment with NotebookLM to make my “podcasts” is a single example of what professionals in the medical field could use, and there are many more tools under development that can really leverage these systems.
In the end, AI’s greatest influence in the T1D world is not going to be found in technology—it’s going to be in what it can do to help people be better informed. The challenge is that there are no systemic incentives to do so, which is beyond the scope of this article. Unsurprisingly, it’s also the same problem that’s plagued us for a long time: Educating consumers is not easy.
The more things change, the more they stay the same.
Now that you’ve read this article, try the AI-generated podcast episode here! I’m interested to know whether you prefer the audio or the text version of this material.