Why Are People Nervous About AI?
Any time a new technology comes along, it’s natural for people to be unsure of it—or even scared. And artificial intelligence, or AI, is a big technology. It can do a lot of things, and it’s growing fast. For many people, that alone is enough to make it feel unsettling.
I’ve heard people say things like, “It’s going to take over everything,” or, “It’s going to think for itself and replace us.” Some even compare it to a human being. One of the first people I heard express real fear about AI was my mom. She told me AI could train itself. And honestly, I wasn’t sure how to respond at the time. I didn’t think that was how it worked—but I didn’t quite know how to explain it either.
That’s what led me here.
I’d already used ChatGPT before, mostly to help with writing. But during one of my conversations, I was asking questions about content warnings—the kind that can pop up when people talk about sensitive topics—and the conversation shifted into how AI works, how it’s trained, and what it actually can and can’t do. It got me thinking: Maybe more people need to understand this.
Even I’m still learning about AI, and I’ve used it for a while now. I’ve always loved the idea of AI—I write a science fiction series, and in the world I’ve created, AI is very advanced. In fact, the systems in that world are way beyond what we have here on Earth. The characters use AI for just about everything. But even in my books, I’ve always written that AI is something you have to program. No matter how smart it is, it still has to be created and shaped by people. That was something I believed instinctively, and it turns out I was right.
What AI Really Is (and What It’s Not)
Artificial intelligence can do all kinds of amazing things—like help you write, help you research, and answer questions that used to take a long time to look up. I used to do most of my research with Google, but now I use ChatGPT for almost all of it. Every now and then I still use Google, but what I like about ChatGPT is that instead of just giving me a list of links, it can actually explain what those links are about. It saves me time and helps me understand the information faster.
I first discovered AI through some people on Mastodon—a few of my visually impaired friends were talking about how they were using it. I can’t remember who specifically, but it caught my attention. The first thing I ever used AI for was writing. Over time, I started using it for other things too, especially accessibility-related tasks like identifying objects or reading text with the camera feature in the ChatGPT app. It’s been a really helpful tool, but even so, it has limits. For example, if I’m trying to figure out what’s written on a package and I don’t aim the camera properly, or if the label is too hard to read, AI can’t magically fix that. It only works with what it’s given.
That’s something people sometimes forget: AI is trained to do specific tasks. It’s powerful, but it’s not alive. It doesn’t have thoughts or feelings. It doesn’t wake up one day and decide to “take over the world.” If it ever said something like that, it would be because a person programmed it to say it—not because it had goals or desires of its own. It doesn’t have a brain. It’s not making choices the way we do. It’s following patterns, rules, and training created by people.
When I first started using ChatGPT in 2023, it was a little frustrating—especially when it came to writing. I use ChatGPT mostly as a writing assistant, not as a story writer. I don’t want it to write the story for me—it’s my story. But early on, it would sometimes try to take over. I’d start writing something, and it would rewrite it or go in a completely different direction. I had to keep saying, “No, that’s not what I want.” And eventually, it would adjust. That happened a lot back then, but it doesn’t anymore. That’s because the AI has been trained and improved over time to work better with people like me—people who want help, not replacement.
I’ve seen how much it’s changed since I started using it. ChatGPT was first made public in November 2022, and I started using it around April of 2023. In just two years, it’s grown a lot—especially in how it handles creative writing. These days, it lets me guide the story, expand my ideas, and polish my grammar. It can even help me explain fictional technologies in my science fiction series by building on things I already imagined. But it only does that because it was trained to. AI doesn’t figure that out by itself. That’s not how it works.
Even tools like Be My Eyes or Seeing AI, which I use for accessibility, work in the same way. They can help me recognize text or describe images, but they don’t understand things the way humans do. They’ve been programmed to identify patterns, describe them, and respond with useful information—but they’re still just tools.
And it takes time to build these systems. Even now, I’m excited about new tools being tested—like document-based AI writing in the Canvas system, which is still in a research preview. People think AI is moving fast (and it is), but it’s not happening overnight. These technologies are created and improved step by step. They don’t grow on their own. It takes human minds—and a lot of effort—to build what AI can do.
What It Means When People Say AI “Learns”
I’ll be honest—I’m still learning about how AI works myself. Even while writing this post, I’ve been asking questions and reading up on what it really means when people say AI “learns.” I’ve heard that it uses really powerful supercomputers and that training it is a long, complicated process. And honestly, that makes sense—because for something like ChatGPT to be this helpful, it has to be built with a lot of care and a lot of data.
One thing I’ve learned is that AI doesn’t “learn” the way people do. It’s not constantly growing or picking up new information on its own. Instead, it’s trained using something called machine learning. That means teams of researchers feed it massive amounts of information—books, websites, articles, and all kinds of written language. Then the AI uses that information to recognize patterns. It starts to predict how sentences work, what words tend to go together, how questions are usually answered, and how writing flows.
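To make the idea of “recognizing patterns” a little more concrete, here is a tiny sketch in Python. It is nothing like the neural networks behind ChatGPT; it just counts which word tends to follow which in some made-up training text. But the core idea is the same: the “model” is built entirely from examples, and once training is done, it doesn’t change on its own.

```python
from collections import defaultdict, Counter

# Tiny made-up "training data" -- real models train on billions of words.
words = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# "Training": count which word tends to follow each word.
next_word_counts = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Guess the word most often seen after `word` during training."""
    counts = next_word_counts.get(word)
    if not counts:
        return None  # never seen in training, so the "model" has no answer
    return counts.most_common(1)[0][0]

print(predict_next("sat"))  # "on" -- the only word that ever followed "sat"
print(predict_next("xyz"))  # None -- it can't answer beyond its training
```

Notice that nothing updates after the counting step. Asking the model questions doesn’t teach it anything new, which is exactly the point of the next paragraph.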
But all of that training has to happen before the AI is released to the public. It doesn’t keep updating itself as it talks to you. It can only respond based on what it learned during training. That’s why you can use the same AI model for months, and unless it’s updated by the developers, it won’t know about any new events or changes in the world. It can’t go online and read new articles on its own. It’s not learning in real time—it’s drawing on what it already learned during training.
Training something like ChatGPT takes a massive amount of time and computing power. OpenAI—the company that created it—didn’t build this in a few months. The earliest version of their language model, GPT-1, was released in 2018. From there, they developed GPT-2, GPT-3, and eventually GPT-3.5 and GPT-4. Each version took years of research, development, and safety testing. ChatGPT didn’t go public until November 2022, but the work leading up to that started many years before. So while it might feel like AI showed up overnight, it’s actually the result of years of work behind the scenes.
One example that helped me understand this better is how ChatGPT handles current events. I use GPT-4, and I know its training only goes up to October 2023. So sometimes when I ask it questions about Donald Trump’s presidency—like things he’s done since taking office again—it doesn’t realize yet that he’s president. That’s because, in its training data, the 2024 election hadn’t happened yet. I’ve had to say, “I know your training cut-off was before the election, but Trump is president now—go look it up.” Then it uses web browsing to find the correct information, and for the rest of that conversation it talks about him as the current president. But that doesn’t mean it’s been re-trained or permanently updated. It’s just responding in the moment. Once the chat ends, it won’t remember. That’s the difference between real-time browsing and actual training—the model doesn’t learn from those updates.
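Here is a rough sketch of that difference in Python. This is not OpenAI’s code, and every detail is invented; it only illustrates how a correction lives in a conversation’s temporary context while the model’s training stays frozen.

```python
# A made-up illustration -- not how ChatGPT is actually implemented.
# The model's training is frozen; only the conversation's context changes.

TRAINING_CUTOFF = "October 2023"  # baked in when the model was trained

def model_reply(context: list[str], question: str) -> str:
    """Answer from fixed training, plus whatever this chat's context says."""
    if any("Trump is president" in note for note in context):
        return "Trump is the current president (per this conversation's context)."
    return f"As of my training ({TRAINING_CUTOFF}), the 2024 election hasn't happened yet."

# Conversation 1: starts from training, then gets corrected.
context = []
print(model_reply(context, "Who is president?"))  # falls back on training
context.append("Trump is president now")          # correction from the user or browsing
print(model_reply(context, "Who is president?"))  # uses the correction

# Conversation 2: a new chat starts with an empty context. The correction
# is gone, because nothing about the model itself was ever changed.
context = []
print(model_reply(context, "Who is president?"))
```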
Some people ask if training AI is a kind of programming—and it is, but it’s not the same as traditional programming where you give a machine step-by-step instructions. Training AI is more like teaching it by example. You give it millions of examples of how humans talk, and it starts to figure out how to respond in a way that makes sense. But even then, it doesn’t understand in the way we do. It doesn’t have thoughts or feelings—it just responds based on the patterns it’s seen.
Learning about all of this has helped me realize just how much work goes into building something like ChatGPT. It also reminded me that as smart as AI might seem, it still depends entirely on the people who train and guide it. It’s a tool—not a mind of its own.
How Is AI Built—and Is It Programming?
One thing I’ve found really fascinating as I’ve been learning about AI is how different it is from the kind of programming most people are familiar with. It’s not like building a phone app or coding a website. I’ve always known that it takes a lot of work and money to train AI systems like ChatGPT—but the more I learn about it, the more I understand just how complex that process is.
At its core, yes—AI is still a form of programming. But it’s a different kind of programming. Traditional software is built using clear instructions. A programmer writes lines of code that say, “If this happens, do that.” Everything is step-by-step. But with AI—especially with models like ChatGPT—developers aren’t writing every single response by hand. Instead, they’re creating systems that can learn from examples. This is called machine learning.
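Here is a simplified illustration of that difference: a hand-written rule on one side, and a rule a program works out from labeled examples on the other. It is a toy, not how any real AI product is built.

```python
# Traditional programming: a person writes every rule by hand.
def is_spam_by_rules(message: str) -> bool:
    text = message.lower()
    return "free money" in text or "click here" in text

# Machine learning: the program derives its own rule from labeled examples.
examples = [
    ("free money waiting for you", True),     # spam
    ("click here to claim your prize", True),
    ("meeting moved to 3pm", False),          # not spam
    ("lunch tomorrow?", False),
]

# "Training": score each word by how often it shows up in spam vs. not-spam.
word_scores: dict[str, int] = {}
for text, is_spam in examples:
    for word in text.lower().split():
        word_scores[word] = word_scores.get(word, 0) + (1 if is_spam else -1)

def is_spam_by_learning(message: str) -> bool:
    # Positive total means the message looks more like the spam examples.
    score = sum(word_scores.get(w, 0) for w in message.lower().split())
    return score > 0

print(is_spam_by_rules("Click HERE now"))       # True -- matched a written rule
print(is_spam_by_learning("free prize money"))  # True -- learned from examples
```

Real systems like ChatGPT use vastly more data and far more sophisticated math, but the shift is the same: from spelling out every rule to letting the rules emerge from examples.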
In machine learning, the AI is given massive amounts of information—millions of words, sentences, conversations, and documents—and it’s trained to find patterns in all that data. Developers build models that help the AI “understand” language in a technical sense, and then they fine-tune those models with feedback and safety guidelines. That’s where human reviewers come in, too—they help the AI learn how to respond in ways that are helpful, accurate, and respectful.
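As a loose picture of what that feedback step does (my own simplified sketch, not OpenAI’s actual process): responses that human reviewers rate as helpful can be folded back in as new training examples, steering the model toward that kind of answer.

```python
# A simplified picture of fine-tuning with human feedback.
# Real systems adjust neural-network weights; here we just curate examples.

base_examples = [
    ("how do I reset my password?", "Go to Settings and choose Reset Password."),
]

# Candidate answers the model produced, plus a human reviewer's rating.
reviewed = [
    ("what are your hours?", "We're open 9 to 5, Monday through Friday.", "helpful"),
    ("what are your hours?", "Figure it out yourself.", "harmful"),
]

# "Fine-tuning": keep only the responses reviewers approved of, and add
# them to the training set used for the next round of training.
fine_tuned_examples = base_examples + [
    (question, answer)
    for question, answer, rating in reviewed
    if rating == "helpful"
]

for question, answer in fine_tuned_examples:
    print(f"Q: {question}\nA: {answer}\n")
```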
Instead of writing a list of answers, developers are creating an environment where the AI can generate its own responses based on what it’s seen before. The code behind it is incredibly complex, and it takes teams of researchers, programmers, safety testers, and trainers to make it work. Even small changes can take months to plan, test, and adjust. It’s part programming, part teaching, and part deep data science.
That’s one of the things that makes AI so powerful—but also so expensive and time-consuming to build. It’s not just code—it’s computation, experimentation, and constant human guidance. And while I’m not a developer myself, I still find the process pretty amazing. It’s a completely different way of building technology than anything we’ve seen before.
Where Is AI Headed?
Artificial intelligence is still a fairly new technology, but it’s growing fast—and it’s not going anywhere. In fact, it’s becoming more and more embedded in everyday life. A lot of major companies are finding ways to add some form of AI into their products and platforms. You’ll find AI tools in Microsoft Word and Excel, in social media platforms like Facebook, and even in Apple’s accessibility features. Amazon, Google, and Meta (the company behind Facebook and Instagram) are all building and integrating AI in different ways. Whether it’s for productivity, creativity, or customer service, AI is becoming something companies see as valuable—and potentially very profitable.
That means AI is only going to keep expanding. It’s an important, powerful tool, and while there’s still a lot we don’t know about where it will go, I believe it’s going to be a big part of our future—especially for people like me. As someone who is blind, AI has already made a huge difference in my life. From helping me write and research to describing images or reading labels, it’s a kind of freedom I didn’t always have.
One exciting example is smart glasses. I don’t have a pair yet—they can be expensive—but I have friends who use them and love them. Meta makes a pair that’s around $300, and they use built-in AI to describe surroundings, read text, and identify objects. That’s a step beyond what my iPhone camera can do on its own. There are also specialized glasses like Envision, Aira Horizon, and others that are designed specifically for blind or visually impaired users. These kinds of tools aren’t just about independence—they’re about survival. Not everyone has a sighted person nearby to help them, and AI gives us a way to do things on our own that used to be impossible.
I hope that someday my fiancé and I will be able to get a pair of these glasses, but even if we have to wait, the technology is only getting better with time. By the time we do get them, they’ll probably be even more advanced—and that’s something I’m really looking forward to.
Of course, AI isn’t just growing in accessibility. Developers are working on new ways AI can support education, healthcare, environmental science, transportation, and even emergency services. Some tools can already detect wildfires early by scanning satellite images. Others are being used to translate languages, detect fraud, or predict equipment failures before they happen. We’re only at the beginning of what this technology might be able to do.
But like any technology, AI can be misused. And unfortunately, it probably will be. The internet is an example we all know—when it became mainstream, it opened up amazing opportunities. But it also came with risks. Scams, misinformation, privacy issues—they all came with the territory. I think the same thing is going to happen with AI. There will be companies and individuals who use it in ways that aren’t ethical or helpful. It’s important to keep that in mind.
That said, the technology itself isn’t the problem—it’s what people choose to do with it. Human beings have always found ways to help each other—or harm each other—regardless of the tools we use. AI won’t change that. But if we use it wisely, I believe it can do a lot more good than harm.
I hope this post has helped you understand AI a little better. I’m still learning, too, and I’m sure I’ll keep learning for a long time. But I wanted to share what I’ve discovered so far—and why I think this technology, while not perfect, is something to be excited about.