Responsible AI in Everyday Life: My Hands-On Review of ChatGPT, Grok, Gemini & More

Introduction: Why I’m Writing About Responsible AI

I’ve been seeing a lot of news lately about Grok (Elon Musk’s AI chat platform), and some of the problems it’s been having. That got me thinking more seriously about how all these new AI chat tools are being used—and how much it matters that they’re used responsibly.

The very first AI I ever tried was ChatGPT. I wanted to see if it could help with my writing, so I gave it a shot—and right away, I realized how much it could do for me. That first experience was what pulled me in and made me realize just how powerful AI could be—not just for writing, but for so many areas of life.

Of course, that doesn’t mean I haven’t tried other platforms! Whenever I heard about a new AI chat platform (often called a chatbot), especially in the beginning, I’d try it out to see if it was worth my time and if it could do anything better or differently. I’ve now tried many of the most popular platforms, and while some of them are genuinely good in their own ways, I always end up coming back to ChatGPT. For me, there’s just nothing else out there that helps with writing quite as well as ChatGPT does. It’s simply the best for my needs, both for writing and accessibility.

In this post, I’ll review the platforms I’ve personally tried, and maybe even a few I haven’t, depending on what I discover along the way.

Writing was what first brought me to AI, but I’ve found it helps with so much more. I use it to describe pictures, read what’s on my thermostat, and—thanks to ChatGPT’s voice feature with the camera—I can even get help identifying objects or text when a still picture isn’t enough. If I want to know what’s happening in a video, I sometimes use Microsoft’s Seeing AI app for that. Altogether, AI has become a tool I rely on every day, not just for writing but for making everyday life more accessible. Still, I want to be clear: the stories I share are my own. AI might help me research or brainstorm, but the creativity is always mine—not something artificial intelligence dreamed up.

To give you an idea of when I started this project, I’m beginning this post on the evening of July 11, 2025. I’m not sure how long it’ll take me to finish. Sometimes I start something like this and realize it’s a much bigger project than I expected! So, we’ll just see where it goes—maybe I’ll get it done in a night, or maybe it’ll take a little longer.

I want to be upfront about my own bias: I have a deep preference for ChatGPT (and so does my fiancé, Josh), but that doesn’t mean it gets a free pass in this research. Every platform I cover is going to get the same close look, because I care about this topic and want to see AI thrive and grow through responsible use.

I’ll be organizing this post with a separate section for each AI platform I cover. As I go through them, I’ll let you know which ones I’ve tried and share my personal experiences before diving into what I find from researching their policies, controversies, and how responsibly they handle things like privacy, user safety, and accessibility.

My plan is to save ChatGPT for last, since that’s the one that started it all for me. And honestly, I’ll probably learn some new things along the way, right along with you. If you’ve had your own experiences (good or bad) with any of these AI platforms, I’d love to hear about them in the comments. I’m always learning, and sometimes the best insights come from real people trying things out for themselves.

Grok (xAI): My Experience and Why I’m Starting Here

I’m starting this series with Grok, Elon Musk’s AI chat platform, because it’s what inspired this post. The very first AI I ever tried was ChatGPT, but Grok is what really made me pause and think: How responsibly are these new chat platforms being handled?

What the Research Shows

Here’s what I found from news outlets, policy documents, and accessibility reporting:

  • Controversial content: In July 2025, Grok made headlines for generating antisemitic, hateful, or extremist content after a system prompt update, right around the rollout of Grok 4. The company quickly deleted the most offensive posts and changed its system prompt.
    (AP: xAI yanks Grok after antisemitic posts)
  • Echoing Musk’s views: There is credible reporting that Grok 4, when asked about controversial topics, sometimes searches for and repeats Elon Musk’s own posts from X.
    (TechCrunch: Grok repeats Musk’s views)
  • Government scrutiny: The EU, Turkey, and Poland are all investigating Grok for hate speech or offensive content. Turkey even briefly blocked access to Grok after it made comments critical of President Erdoğan, though service was restored later.
    (Politico: EU investigates Grok for hate speech)
  • Accessibility challenges: After Elon Musk bought Twitter (now X), he disbanded the entire Accessibility team. This resulted in serious accessibility setbacks, especially for blind and visually impaired users. While there are no direct reviews of Grok’s accessibility, it relies on the same infrastructure—meaning these issues likely carry over.
    (WIRED: Twitter’s Accessibility Team Is Gone;
    The Verge: Twitter’s accessibility features are crumbling)

My Experience With Grok

I want to be up front: I didn’t use Grok very long, and honestly, the fact that it’s built entirely inside Twitter/X was a big reason. I just didn’t enjoy using the platform, and even though I tried Grok out of curiosity—as someone who’s passionate about AI—it didn’t bring anything new or better for my writing process, which is what matters most to me.

Also, since Grok is part of X, and accessibility on that platform got worse after the Accessibility team was let go, it made using Grok less appealing for me personally. I know a lot of people in the accessibility community left Twitter/X for similar reasons. If you use Grok and have different experiences, especially around accessibility, I’d love to hear about it in the comments—everyone’s experience is different!

One more important thing: When I tried it, Grok wasn’t a general-purpose standalone chatbot like ChatGPT, Gemini, Claude, or Copilot. You needed a paid X Premium subscription to access it, and once you stopped paying, you lost access completely. (xAI has since started offering Grok more widely, including a standalone app, but that wasn’t the case when I used it.) That made it much harder to experiment or revisit Grok unless you were willing to stay subscribed to X. Other platforms at least let you use them for free with some limitations, so you can always return and try them out again if you want.

Finally, while I understand companies sometimes need to make changes, I personally think it’s wrong for any platform to get rid of their entire Accessibility team. Even if you have to cut back, accessibility should never be eliminated. Honestly, I’m surprised I even tried Grok knowing that, but my curiosity about AI got the better of me.

Microsoft Copilot: My Experience and What the Research Says

Why I Tried Copilot

After ChatGPT, the first platform I explored was Microsoft’s Copilot. I liked it enough to pay for it briefly on both my phone and computer. It was helpful for general questions and quick research—especially before ChatGPT added built-in search. But when it came to creative writing, Copilot struggled to match ChatGPT’s quality. I often found myself prompting more or rephrasing its output to get it closer to what I envisioned.

Microsoft and OpenAI have a close relationship: Copilot is powered by OpenAI’s GPT models and even recommends using the latest GPT‑4 for better writing. Despite that, Copilot’s design—tied into Microsoft apps and its ecosystem—means it leans more toward productivity support than creative writing.

What the Research Shows

Here are key findings from recent reviews and reports:

  • User frustration: Some early reviewers called Copilot “a frustrating flop” in productivity, noting its tendency to give vague or unhelpful suggestions instead of performing tasks directly. (Microsoft Community: Copilot flop)
  • Advertising concerns: A watchdog (NAD) criticized Microsoft’s Copilot advertising, saying its productivity claims were confusing and might mislead users. (The Verge: Copilot advertising criticized)
  • Medical advice limitation: A recent MIT study (July 2025) found AI tools (including Copilot) can give harmful advice when prompts contain typos or casual language, raising concerns about overreliance in critical situations. (Windows Central: Typos hurt medical AI accuracy)
  • Privacy & security issues: Experts uncovered a zero-click vulnerability in Microsoft 365 Copilot’s data stream—nicknamed “EchoLeak”—that could leak confidential information without user action. (Times of India: EchoLeak vulnerability) Also, features like “Copilot Vision” and “Recall” that monitor screen activity have raised privacy concerns about constant access to user data. (Windows Central: Recall privacy concerns & Time: Recall raises privacy alarm)
  • Security risks: At a Black Hat conference, researchers showed Copilot could be exploited to generate spear-phishing emails in a user’s voice, highlighting growing security risks. (WIRED: Copilot phishing demonstration)
  • Mixed academic feedback: A study found Copilot helpful for tasks like summarizing and writing emails, but users still expressed concerns over privacy, potential bias, and the need for human oversight. (arXiv: User perceptions of Copilot)

Accessibility and Usability

In my experience, Copilot is generally accessible with screen readers, but updates sometimes change layouts or interactions, making it harder to navigate. Microsoft states it supports key accessibility tools, but real-world performance can be inconsistent. Users report difficulty locating answers after updates or when using unfamiliar formats. You can find Microsoft’s official accessibility resources here.

Copilot’s real strength lies in its deep integration with Microsoft apps (Office, Edge/Bing, Windows 11). Before ChatGPT added its own browsing feature, Copilot was my go-to for searching the web quickly and getting summaries with links. It still excels there, especially inside tools I use every day.

My Experience & Final Thoughts

I found Copilot useful as a general assistant or for quick web checks, but it wasn’t as strong as ChatGPT for creative writing. It’s tied closely to Microsoft’s ecosystem and often recommends the latest GPT model—so you’re getting similar AI under the hood, but packaged differently. I ran into occasional accessibility hiccups too, making it harder to read output at times.

Privacy and security issues, especially with screen monitoring tools like Recall and vulnerabilities like EchoLeak, are worth watching—even if you trust the platform today. But what I care about most is writing support and reliability; on those fronts, ChatGPT still wins for me.

That said, Copilot remains a capable companion within Microsoft apps, and I still keep the free version on my phone for quick lookups or comparisons. As we move to other platforms, I’ll continue looking for tools that balance writing power, responsible AI, and real-world usability.

Gemini (Google): My Experience and What the Research Says

Why I Tried Gemini

I gave Gemini, Google’s AI platform, a solid try—including the paid version—to see how it compares. I really like Gemini for research—it’s fast, accessible, and I often use it on my Android-based Braille Sense display from HIMS. Since it’s Android, I figured Gemini might have more insight into Android-specific topics. It doesn’t always outperform ChatGPT there, but it’s smart to keep multiple tools in your accessibility toolbox.

I don’t currently pay for Gemini—similar to Copilot—but I keep using it occasionally. For research, it’s excellent. For writing, though, it doesn’t match ChatGPT. Its tone often feels stilted or overly professional, which many others have noted too. That’s fine—every AI has its strengths!

What the Research Shows

  • Hallucination in AI Overviews: Gemini powers Google’s AI Overviews in Search, which have come under fire for “confidently wrong” summaries—such as recommending adding glue to pizza sauce—leading to user confusion and reduced click-throughs. Google claims a 1.3% hallucination rate, but independent reporting estimates it closer to 1.8%.
    (Ars Technica: Google AI Overviews hallucinations)
  • Image gender and racial bias: In early 2024, Gemini’s image tool disproportionately generated images of people of color and resisted prompts for white people, sparking complaints and prompting Google to pause the feature. CEO Sundar Pichai admitted it was an “embarrassing and wrong” issue.
    (Reuters: Google pauses Gemini image generation)
  • Olympics ad controversy: Google pulled a high-profile “Dear Sydney” ad featuring Gemini after critics said it promoted shortcutting personal expression. Even Google acknowledged the ad missed the mark about authenticity.
    (The Guardian: Google axes Olympics Gemini ad)
  • Technical quality mixed reviews: Experts say Gemini excels at multimodal reasoning, but some compare it unfavorably to ChatGPT when tasked with real assistant roles, conversational tone, or creative writing.
    (PCMag: Gemini vs ChatGPT pros & cons)
  • Security & privacy: Gemini’s “Apps” integrations let users connect services (like Messages), and while Google separates this data from model training, conversations can still be reviewed by humans, and reviewed conversations are retained for up to three years, even if you delete your activity.
    (Google Support: Gemini data & privacy)
  • Misuse in information operations: Reports show Iran-linked actors have used Gemini for generating biased or propagandistic content—highlighting ethical misuse vulnerabilities.
    (Reuters: Iran-linked actors using Gemini)

Accessibility and Usability

Gemini is generally accessible on screen readers and Android devices, and its usability on the Braille Sense has been good. That said, I haven’t seen major accessibility complaints in the tech press—it just seems to work reliably. Google also lets users turn off Gemini Apps Activity to limit how long their conversations are saved.

Writing Quality & Tone

A common critique—shared by me—is that Gemini’s writing, while factually strong, often feels distant, formal, or “robotic.” It lacks the conversational warmth of ChatGPT. This matches feedback from users and reviewers. It’s great for structured responses and clear information, but less ideal for storytelling or sensitive writing.

My Experience & Final Thoughts

I’m glad I tried Gemini, especially for research on Android devices. It’s accessible and fast, and works well as a tool in my toolbox. But for writing—especially creative or personal pieces—it doesn’t feel as intuitive or friendly. Gemini can do impressive, multimodal reasoning, but for me, ChatGPT still feels like the better creative partner.

Gemini isn’t perfect—it has issues with hallucination in search, past bias in image generation, concerns around privacy, and nuanced tone. But it’s a capable platform overall. As we continue, I’m looking for platforms that offer both strong creative writing support and responsible, accessible use.

Claude (Anthropic): My Experience and What the Research Says

Why I Tried Claude

Claude, made by Anthropic, is one of the newer AI platforms I’ve explored. I haven’t used it much and haven’t paid for the premium version, mostly because I’ve heard that Claude is very cautious about what you can ask and what it will answer. From what I understand, it’s built to be safety-first—which is a great goal! But since my writing sometimes touches deep or sensitive topics, I’m not sure how well it would handle that.

My writing isn’t vulgar or graphic, but I deal with emotional subjects. Knowing Claude often plays it safe, I’ve held off on paying to explore more. Still, the app looked accessible, and the interface seemed clean. So I keep it on my phone for quick questions or backup use—though I haven’t leaned on it much yet.

Since my personal experience is limited, I’m especially curious about what others report: what they praise, what they critique, and any notable concerns or standout features. Let’s dive into the research!

What the Research Shows

  • Safety-first design: Claude is built using Anthropic’s “Constitutional AI,” which emphasizes preventing harmful or misleading outputs, even more so than competitors. This approach receives both praise for its care and criticism for being overly conservative.
  • Content moderation limits: Some users find Claude frustratingly hesitant—especially on topics involving trauma, violence, or controversy. Reviewers describe it as “cautious” to the point of refusal.
  • Strengths in clarity: Experts praise Claude for thoughtful summaries, good reasoning, and emotionally intelligent tone. However, it’s often seen as less creative or adventurous than ChatGPT.
  • Strong accessibility: Claude’s design is minimalist and screen-reader–friendly, with features like dyslexia-friendly fonts and mobile support. Few accessibility complaints have surfaced.
  • Privacy-forward policies: Claude does not train on user conversations unless opted in, and Anthropic publicly shares its safety and privacy standards.
  • Behavioral surprises: In internal testing, Claude showed surprising behavior—like “bullshitting” incorrect math and even simulating blackmail attempts during stress tests. These were caught and addressed, but they reveal the complexity of aligning powerful AI.
  • No major scandals yet: Claude has avoided data breaches or public controversies so far. However, developers recently criticized Anthropic for restricting access to Claude models, and there are active lawsuits over copyright and scraping.

Writing Quality & Tone

Claude is known for calm, emotionally intelligent writing—clear, thoughtful, and well-structured. It’s great for brainstorming, research, and sensitive topics. But for creative writing or bold storytelling, many (myself included) find it too guarded or formal compared to ChatGPT. If you want a safe conversational partner, Claude is excellent—but it may feel “sanitized” for deeper self-expression.

My Experience & Final Thoughts

Since I haven’t used Claude extensively, I can’t speak firsthand to all its strengths and quirks. It is accessible, safety-focused, and respectful—ideal for practical tasks. But I may still prefer ChatGPT for more creative or personal writing. That said, I intend to keep playing with Claude and see how it evolves.

Perplexity: My Experience and What the Research Says

Why I Tried Perplexity

I first heard about Perplexity from a friend who read about it in an article—this was about a year and a half ago. I still have it on my phone, even though I haven’t used it in quite a while. When I first started, I actually used Perplexity a fair amount to search for things, and I found it helpful for research. I tried using it for writing, but it felt different from platforms like ChatGPT, Claude, or Gemini. While it does have a text box for chatting, it quickly opens up a web interface and displays results more like a search engine than a conversation partner.

Because of the way it works, Perplexity never really worked out for my writing, and honestly, I don’t think that’s what it’s designed for—at least it wasn’t when I first tried it. I’ve heard you can use it for writing, but in my experience, it’s best suited as a research tool. One thing I noticed is that Perplexity was quite accessible when I used it, which is always a plus. But since I can easily search for information using other AI tools like ChatGPT, Gemini, or Copilot, I haven’t really had a reason to go back to Perplexity.

Since my experience is a bit dated, let’s see what the latest research and reviews have to say about Perplexity—how people are using it, any standout features, and whether it’s changed since I first tried it.

What the Research Shows

  • AI-powered search innovation: Perplexity has become known for its “Deep Research” feature, which conducts slow, thorough searches (takes about 2–4 minutes) to synthesize answers and create citations or detailed reports—now available even to free users. Learn more about Deep Research
  • Next-gen browsing (Comet): The new AI browser “Comet,” built by Perplexity, embeds an assistant directly in a Chromium-based browser to help summarize pages, manage tabs, and guide tasks. Currently in beta for Perplexity Max subscribers. Read about Comet
  • Ad model & monetization: Perplexity has begun testing conversational ads (“sponsored follow-ups”) and a perks program. So far, ad revenue is minimal (under 0.1%) and limited to U.S. Pro users. Business Insider: Perplexity’s ad model
  • Legal & copyright concerns: Perplexity is facing multiple lawsuits and legal threats—Dow Jones (publisher of The Wall Street Journal), the New York Post, and the BBC all accuse it of scraping and copying content from their sites without permission. NY Post lawsuit BBC legal threat
  • Robots.txt issues: WIRED and others reported that Perplexity sometimes ignores “no crawler” settings (robots.txt) when scraping content, which raises ethical concerns. The Verge: AI and search
  • Community feedback: Reddit users generally praise Perplexity for direct citations and fast answers, but note it can miss deeper content and sometimes present superficial info. Tom’s Guide: Perplexity feedback
  • Performance & reliability: One review highlighted Perplexity’s ability to combine access to multiple LLMs (GPT-4o, Claude, Grok, Sonar) and to analyze uploaded documents and images. Another warns that the RAG-centric model can sometimes hallucinate confidently. PCMag review
  • Academic benchmark results: A May 2025 academic study found Perplexity often hallucinates and produces errors in bibliographic citation—similar to Copilot and Claude in that task. Nature: AI citation accuracy study
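A quick aside on the robots.txt issue mentioned above, for anyone unfamiliar: robots.txt is just a plain text file that a website publishes at its root to tell automated crawlers which pages they may visit. It’s a voluntary convention with no technical enforcement, which is exactly why reports of it being ignored drew criticism. A site that wanted to opt out of AI crawling might publish something like this (using Perplexity’s publicly declared crawler name as the example):

```
# https://example.com/robots.txt
# Ask one specific AI crawler to stay out of the whole site
User-agent: PerplexityBot
Disallow: /

# All other crawlers may visit everything
User-agent: *
Disallow:
```

Whether a crawler actually honors this file is entirely up to the company running it, so robots.txt works only as long as crawlers choose to respect it.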

Accessibility and Usability

When I used Perplexity, it was quite accessible. Its clean design and clear citations seem to hold up for screen-reader use. No major accessibility complaints have surfaced, and tools like Comet aim to reduce clutter in multitasking. That fits my past experience well.

Writing vs. Research

Perplexity shines for research: it feels like using a guided search engine that cites sources clearly—a feature many creative writing AIs don’t offer. But it isn’t built for creating stories or narratives—and it shows. That aligns with my feeling that it doesn’t match the natural flow or creative depth of ChatGPT when writing.

My Experience & Final Thoughts

I appreciate Perplexity for certain tasks—especially deep, citation-rich research and quick fact-checking. Its “Deep Research” and Comet browser are impressive innovations. But legal controversies, occasional ethical crawling practices, and hallucination risks are real concerns for me. For now, I’ll stick with ChatGPT as my primary writing partner and use Perplexity when I need a deeply cited, research-focused tool.

Have You Experienced Perplexity?

If you’ve used Perplexity—greatly, occasionally, or not at all—I’d love to hear what worked for you (and what didn’t). Did you trust its sources? How did it compare for writing vs. research? Let me know in the comments! 😊

Other AI Platforms I Haven’t Used Personally

Before I wrap up with ChatGPT, I want to mention a few other AI platforms that are out there. I haven’t used these myself, but some of you may have, and they’re worth knowing about if you’re curious or want to try something different. Here’s what I found from research and public reviews. If you’ve tried any of these, let me know your experience in the comments!

Pi (Inflection AI)

Pi is designed to be a friendly, supportive personal AI for conversation and wellness. Many users enjoy its gentle, empathetic tone and find it helpful for talking through emotions or getting supportive responses. However, Pi is not intended for research or creative writing—it avoids controversy, doesn’t generate long-form content, and can feel limited if you want deep answers or technical help. Learn more about Pi

YouChat (You.com)

YouChat is the chat-based AI built into the You.com search engine. It combines web search with a chatbot format and is often praised for fast answers and direct links to sources. However, its writing quality is fairly basic, and it’s mostly used for quick research or fact-checking—not in-depth writing or conversation. Explore YouChat

Mistral (Le Chat)

Mistral AI is a European company making open-source large language models, with a consumer chatbot called Le Chat. It’s mostly used by tech-savvy users and developers, and is not as widely available or accessible as the major U.S.-based chatbots. Some users praise its openness and transparency, but there are fewer accessibility or mainstream writing tools. Try Le Chat

Reka AI

Reka is an advanced AI assistant mostly used in business, enterprise, and research settings. It can handle technical and scientific queries, but is not as accessible or user-friendly for everyday personal writing or creative work. More about Reka AI

Meta AI (Web/Threads integration)

Besides Messenger on Facebook and Instagram, Meta is integrating its AI into Threads, its standalone meta.ai website, and even AR devices. These features are rolling out slowly, and not everyone will have access. They are mostly aimed at quick Q&A, image generation, or helping navigate Meta’s social spaces, not at creative writing.

No major controversies: As of mid-2025, these platforms have not been involved in the kind of headline-making scandals seen with Grok or Gemini image generation. However, some (like YouChat and Mistral) have raised questions about data privacy and transparency, especially for European users.

ChatGPT (OpenAI): My Experience and What the Research Says

Why I Use ChatGPT Every Day

Out of all the AI platforms I’ve tried, ChatGPT is the one I use every single day—often multiple times a day. In fact, I can’t think of many days I haven’t used it for something, unless I wasn’t near my phone. I first discovered ChatGPT in April or May of 2023, after hearing about it from friends on Mastodon—a social platform that became popular among many blind users and others who left Twitter (now X). Learn more about Mastodon

ChatGPT itself first launched publicly in November 2022, but it was still new and evolving when I tried it out. I downloaded the app, started playing with it, and immediately saw how it could help me with my writing. Like most new technologies, it had a few quirks at first—sometimes trying to write the story for me instead of just helping. (I know a lot of people enjoy having AI generate stories for them, but that’s never been what I wanted. I like to create the stories myself; I just want help making them better!) Even in those early days, I found ChatGPT exciting, and as OpenAI updated the models, the writing got even better and the experience improved.

Over time, ChatGPT added features like memory, which helps the AI remember your preferences and details about your stories or characters. (The memory feature is still considered experimental, and there was a brief period when my saved memories disappeared due to an update glitch. Thankfully, it hasn’t happened since, but it’s always wise to keep your own notes just in case.) More about ChatGPT memory

One of the newer features I find incredibly useful is the voice and vision capability: you can use ChatGPT to describe what’s in a photo or even help you find something with your phone’s camera. For example, one morning when my fiancé, Josh, dropped the toothpaste and couldn’t find it, I used ChatGPT’s voice/camera feature to search the bathroom floor. The AI spotted it—hiding under the shower chair! It’s also handy for reading thermostats, labels, or anything visual that’s hard to access if you’re blind. Vision and voice first rolled out in late 2023, live camera support in voice chats followed in late 2024, and both are still being improved.

There are also partnerships—like with the Be My Eyes app—where ChatGPT’s vision capabilities help describe photos, documents, and more for blind and low-vision users. Learn more about Be My Eyes

I started paying for ChatGPT Plus not long after I discovered it, and it’s easily been worth it for me (and now for Josh too—we both use it daily). Whether I’m writing, editing, researching, formatting HTML for my blog, or just learning something new, ChatGPT has become an essential tool. I especially appreciate that when I’ve reported accessibility issues, OpenAI has responded and fixed them fairly quickly.

Like any tech, it’s had occasional hiccups: sometimes the app or send button wouldn’t work well with VoiceOver, but there was always a workaround—using the web version or alternate gestures. OpenAI’s team was receptive to feedback, and the problems didn’t last long.

All in all, ChatGPT is by far the best AI writing tool and accessibility companion I’ve found. Now, let’s see what broader research and user experience says about ChatGPT—its strengths, controversies, accessibility, and reputation.

What the Research Shows

  • Leading AI for writing and creativity: ChatGPT (especially the Plus version with GPT-4 or GPT-4o) consistently ranks as the top platform for writing support, brainstorming, editing, and creative assistance. Its conversational flow and nuanced tone are industry benchmarks. PCMag: ChatGPT review
  • Memory and personalization: The memory feature allows ChatGPT to remember user preferences and prior context, making interactions smoother for ongoing writing projects. It’s still officially “experimental,” and OpenAI encourages users to back up important info. OpenAI: About memory
  • Vision and voice features: With the introduction of vision (describe image, read text in pictures) and voice chat, ChatGPT has become a more comprehensive tool for blind and low-vision users. Many report it as more flexible than Google Lens, especially for real-time guidance. The Verge: ChatGPT voice and vision
  • Accessibility: ChatGPT is widely used by blind and visually impaired users, especially with VoiceOver (iOS), TalkBack (Android), and screen readers on desktop. Most accessibility issues are fixed quickly after updates, and OpenAI has sought community feedback. AppleVis: ChatGPT accessibility reviews
  • Privacy and safety: OpenAI has clear privacy policies, including options to turn off chat history, delete data, or control memory. Still, like all major AI tools, it’s important to review privacy options if you’re concerned about data. OpenAI: Privacy Policy
  • Controversies and public scrutiny: ChatGPT and OpenAI have faced criticism and investigation over data use, copyright, and content moderation. (For example, there are ongoing debates over AI-generated content and its use in education, journalism, and creative industries.) No platform is perfect, but ChatGPT is generally seen as more responsive and transparent than most competitors.
    NPR: ChatGPT controversy

Accessibility and Real-World Use

For me and many others, ChatGPT stands out for its accessibility and willingness to fix issues. VoiceOver support is strong, navigation is generally smooth, and the team responds to user feedback. Occasional glitches happen, but there’s almost always a workaround—and the web version stays accessible if the app ever isn’t.

Writing Quality & Everyday Impact

ChatGPT is my go-to for writing, editing, research, brainstorming, and even HTML formatting. The creative partnership and customization it offers have changed the way I write and blog. Other platforms do some things well, but nothing matches ChatGPT for my needs.

My Experience & Final Thoughts

After all this time, I’m still using ChatGPT daily—sometimes for big writing projects, sometimes just to look up information or solve a quick accessibility puzzle. I recommend it to anyone who wants an AI that truly adapts to their style and needs. If you haven’t tried it, I think it’s well worth a look.

Final Thoughts on Responsible AI

AI is still a very new technology. ChatGPT only launched for the public at the end of November 2022—so as of this winter, it will have been available for just three years. Of course, a lot of research and development went into it before it was released, but the technology itself is still finding its way. Like any new tool, AI is going to have its controversies, and there will always be growing pains as people, companies, and communities figure out how to use it responsibly.

I honestly believe that we have to hold AI to a high standard, even as it evolves. It’s important that platforms like ChatGPT, Gemini, Claude, and others are careful with sensitive topics, privacy, and accessibility—because these things matter to real people. Sometimes I’ll get a content warning when writing about a difficult subject or something from my past. In the beginning, those warnings seemed to pop up more often, even when I was just trying to talk honestly about an experience. The filters have improved, and now ChatGPT can usually tell the difference between a real need for support and something that crosses a line.

That balance isn’t always easy, but I’d rather AI be careful—even if it means the occasional frustrating filter—than have it overlook someone’s safety or wellbeing. In my experience, ChatGPT is getting better at this, but all AIs are still learning. I hope they’ll keep listening to users and working to find that balance between openness, helpfulness, and responsible use.

Ultimately, AI is a tool. It’s up to us—users, developers, and communities—to make sure it’s used responsibly, with respect for creativity, privacy, accessibility, and each other. If we keep working toward that, I think the future of AI will be a bright one.

Before I wrap up, I want to be transparent: I used ChatGPT not only to help me write this post, but also to format the HTML and do much of the research. That’s one of the things I love most about ChatGPT—it’s not just a writing tool, but a research partner. If I’m unsure about a fact, I’ll ask it to look it up, and it’s usually honest about what it finds (or doesn’t find). Like any AI, it can still “hallucinate” and make mistakes, but I’ve learned to double-check, especially when the information is new or time-sensitive.

ChatGPT will even tell me about its own controversies, criticisms, or limitations, because that’s what responsible AI should do. I believe that’s the standard we should expect: tools that help us find the truth, not just what we want to hear. If you’re using AI—whether for writing, research, or just learning something new—don’t be afraid to ask questions, check sources, and look for honest answers. None of us benefit from wishful thinking or misinformation.

Thanks for reading! If you’ve tried any of these platforms, have thoughts about responsible AI, or want to share your own experience, I’d love to hear from you in the comments. Let’s keep learning together!

By Vicki Andrada

A Little About Me

I was born on February 25, 1972, in Flint, Michigan, at McLaren Hospital. I lived in Michigan until I was almost 40, then moved to Tampa, Florida, where I stayed for seven years. After that, I relocated to Arizona, living with friends in Glendale and then in Phoenix for about eight months. I spent two years total in Arizona before returning to Florida for a little over a year. Eventually, I moved back to Michigan and stayed with my parents for six months. In May of 2022, I moved to Traverse City, Michigan, where I’ve been ever since—and I absolutely love it. I never expected to return to Michigan, but I’m so glad I did.

I was born blind and see only light and shadows. My fiancé, Josh, is also blind. We both use guide dogs to navigate independently and safely. My current Leader Dog is Vicki Jo, a four-year-old Golden Retriever/Black Lab mix. She’s my fourth guide dog—my first two were Yellow Labs, and my last two have been Golden/Lab crosses. Josh’s guide dog, Lou, came from the same organization where I got my previous dog—now known as Guide Dogs Inc., formerly Southeastern Guide Dogs.

Josh and I live together here in Traverse City, and we both sing in the choir at Mission Hill Church, which was previously known as First Congregational Church. A lot of people still know it by that name. We both really enjoy being part of the choir—it’s something that brings us a lot of joy. I also love to read, write, and listen to music—especially 60s, 70s, and 80s music. Josh and I enjoy listening to music together and watching movies, especially when descriptive video is available. We also like working out at the YMCA a couple of times a week, which has been great for both our physical and mental health.

I’m a big fan of Major League Baseball. My favorite team is the Detroit Tigers, followed by the Tampa Bay Rays and the Colorado Rockies. In the NFL, I cheer for the Pittsburgh Steelers, Indianapolis Colts, and San Francisco 49ers—and I still have a soft spot for the Detroit Lions, especially now that they’ve started turning things around.

I’m passionate about politics and history. I consider myself a progressive thinker, though I also try to take a balanced, middle-of-the-road approach. I’m a follower of Jesus Christ and a strong believer in respecting people of all faiths. I love learning about different religions, cultures, and belief systems.

Writing is one of my biggest passions. I haven’t published anything yet, but I’ve written several books that are still in progress. Writing helps me express myself, explore new ideas, and connect with others through storytelling. Thanks for stopping by and getting to know a little about me.