One of the largest barriers to GenAI adoption in organizations is tail risk and "last mile" failures. A recent incident with OpenAI hallucinations in a healthcare setting shows that, despite the potential, the risk is significant.

OpenAI’s AI-powered transcription tool, Whisper, was praised for its “human-level robustness and accuracy.” It now faces scrutiny over a significant flaw: a tendency to generate false or "hallucinated" content. Engineers, researchers, and developers have reported Whisper hallucinations ranging from minor inaccuracies to disturbing inventions like racial commentary, imagined violence, and fictional medical treatments. More than a dozen researchers found these issues in up to 80% of transcriptions. Even for short, clear audio clips, studies reveal a high rate of hallucinations, raising alarms about potential risks in sensitive areas like healthcare.

Whisper has been integrated into transcription tools used by over 30,000 clinicians in the U.S. and Europe. Nabla, a France- and U.S.-based company, built a Whisper-based medical transcription tool that has processed an estimated 7 million medical visits. While the tool aims to reduce clinicians’ documentation burden, some are concerned about its accuracy, especially since the original audio recordings are deleted for privacy. Without those recordings, verifying the accuracy of transcriptions is difficult, potentially leading to errors in patient records.

Whisper’s hallucinations extend beyond healthcare. Studies show fabricated details often emerge in transcriptions, such as non-existent drugs or imaginary violent actions. Researchers Allison Koenecke and Mona Sloane found that 40% of hallucinations in sample recordings from TalkBank contained potentially harmful or misleading content. In one example, Whisper added violent phrases to an innocuous statement.
The usual defense is that these tools shouldn't be used for decision-making, but when a tool is marketed to facilitate automation at scale, people will inevitably use it that way. Privacy concerns also loom as data-sharing practices come to light, in particular tech companies' access to confidential doctor-patient conversations. As Whisper and related GenAI tools continue to evolve, the need for rigorous testing, transparency, and clearly defined limits on usage remains critical. #AIEthics #Whisper #OpenAI #Healthcare #DataPrivacy #ArtificialIntelligence #MedicalAI #TechEthics #MachineLearning
Writing With Voice Assistants
-
𝗡𝗮𝘃𝗶𝗴𝗮𝘁𝗶𝗻𝗴 𝘁𝗵𝗲 𝗗𝗮𝘁𝗮 𝗣𝗿𝗶𝘃𝗮𝗰𝘆 𝗠𝗮𝘇𝗲 𝘄𝗶𝘁𝗵 𝗩𝗼𝗶𝗰𝗲 𝗔𝘀𝘀𝗶𝘀𝘁𝗮𝗻𝘁𝘀

Voice assistants have undoubtedly transformed the way we interact with technology, making tasks more convenient and efficient. However, as we embrace this innovation, we must also be vigilant about the data privacy concerns it raises. The convenience of voice commands often means sharing personal information with these assistants, from shopping preferences to calendar events. It's essential to recognize the following privacy concerns:

🎙 Data Collection: Voice assistants record and store voice data, raising questions about who has access to this information and for what purposes.

🔍 Eavesdropping: There have been instances where voice assistants activate unintentionally, potentially listening to private conversations. Ensuring your assistant isn't recording when you don't intend it to is crucial.

🤖 Third-Party Integration: Voice assistants often integrate with third-party apps and services, which can result in data being shared across platforms.

🔒 Security Measures: Are the security measures in place robust enough to protect your voice data from unauthorized access or breaches?

As professionals, we must prioritize data privacy in the digital age. Have you checked the settings of your voice assistants?
-
Most Custom GPTs are useless.

I have built 20+ GPTs. I only use 4. The other 16 died for the exact same reason: too vague → too inconsistent → too annoying to use. If I have to 're-instruct' a GPT every time I don't like the output, it's not a tool, and it cannot be a useful Custom GPT.

Here are 9 things that separate a useful Custom GPT from ones that collect dust:

1. Solve ONE problem, not five
↳ Bad: 'Help me with writing'
↳ Good: 'Find AI-sounding phrases in my text and tell me how to fix them'
My De-AI GPT does exactly one thing: catches AI slop and rewrites it human.

2. Write Instructions like you are training a sharp new hire
↳ Don't: 'Be helpful with writing'
↳ Do: 'When I paste text, first score it 0-10 for AI-smell. Then list the top 3-5 problems with exact snippets. Then ask clarifying questions before rewriting.'
Step-by-step instructions > vague goals.

3. Tell it what NOT to do
↳ 'Never pad the list with low-impact issues'
↳ 'Don't over-tighten into corporate tone'
↳ 'Don't shame AI use'
Constraints prevent generic output. My De-AI GPT has 5 micro-guardrails.

4. Set the output format explicitly
↳ 'Always respond with: Score → Problems → Clarifying Questions → Wait'
↳ Predictable structure = usable output every time. No guessing what you will get.

5. Add a forcing function
↳ 'Ask clarifying questions BEFORE rewriting. Do NOT rewrite until the user responds'
↳ This one change stops the GPT from inventing facts or guessing your intent. It forces the model to think before answering.

6. Give it evaluation criteria, not just vibes
↳ My De-AI GPT checks 4 layers: grounding failures, abstraction, overused structures, voice problems
↳ Plus specific heuristics: adjective stacks, em-dash overuse, 'This + verb' sentences
Specific criteria > 'make it sound human'

7. Test with your worst inputs
↳ Don't test with clean drafts
↳ Paste your laziest, most AI-generated mess
↳ If it misses problems on hard mode, it will fail when it matters

8. Iterate the Instructions, not the conversations
↳ When output is wrong, don't just re-prompt
↳ Go back and edit the Instructions
The GPT should get it right the FIRST time, every time.

9. If you are not using it weekly, delete it
↳ Ruthless test: Did I use this in the last 7 days?
↳ No? It is solving the wrong problem, or solving it badly.

4 of my GPTs survived. 16 did not. A well-built Custom GPT gets you:
↳ Consistent output without re-explaining
↳ A tool that improves as you refine it
↳ Time back every single week

I have opened up my De-AI GPT for free. Same one I use before publishing every single LinkedIn post. Grab it here: [LINK IN COMMENTS]

-------

Save this for when you build your next GPT. I share 1 practical AI strategy weekly in Work in Beta: https://lnkd.in/gPqYEzaJ
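As a concrete illustration, the rules above can be drafted as a single Instructions block and sanity-checked against the 8,000-character cap on Custom GPT Instructions mentioned elsewhere in this feed. Every phrase in this sketch is illustrative, not the author's actual De-AI GPT configuration:

```python
# Hypothetical Instructions block following the 9 rules: one job,
# step-by-step tasks, explicit "never" constraints, a fixed output
# format, and a forcing function. All wording is placeholder text.

INSTRUCTIONS = """\
Role: You review pasted text for AI-sounding writing. You do ONE job.

Steps:
1. Score the text 0-10 for AI-smell.
2. List the top 3-5 problems, quoting the exact snippets.
3. Ask clarifying questions. Do NOT rewrite until the user responds.

Never:
- Pad the list with low-impact issues.
- Over-tighten the text into corporate tone.
- Shame the user for using AI.

Output format, always in this order:
Score -> Problems -> Clarifying Questions -> Wait
"""

# Check the character budget before pasting into the GPT builder,
# since Instructions are limited to 8,000 characters.
assert len(INSTRUCTIONS) < 8000
print(f"{len(INSTRUCTIONS)} characters used")
```

Keeping the whole block well under the limit leaves room to iterate on the Instructions later (rule 8) without splitting the GPT.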
-
Don't outsource your writing to AI. Use it to sharpen it. If AI writes for you, it must sound like you. Not like a robot.

The four parts explained:

1. Voice — what you stand for. Your principles, your point of view, the promise your writing makes. Voice answers: what do I believe, and what do I refuse to say? It sets the stance before you type a word.

2. Tone — the emotional color. Calm or fiery. Tight or loose. Professional or irreverent. Choose a spot on that grid and hold it. Consistency builds trust.

3. Style — how your words behave. Sentence length. Rhythm. Question clusters. One list per piece. Concrete examples, not vague claims. Style is the fingerprint readers recognize.

4. Structure — how the ideas flow. Start with a sharp hook. Move from problem to principle to playbook. Use clean pivots. Close with a clear action.

How to make AI write like you:

- Feel it first. Read three strong posts you wrote. Note rhythm, sentence length, questions, and banned words. That is your fingerprint.
- Polish with a sample and a tight prompt. Feed one sample and your rules. Ask for a short post in that style. Check tone so you do not drift into beige.
- Iterate fast. Generate. Edit to sound like you. Tell the model what you changed. Update the rules. Repeat. Each loop gets closer.

Want my exact prompts? Comment "AI Voice" and I will send you a short workbook with the prompts (I do it manually, so give me a few minutes :)

Your move.
-
🧠 Why do we expect AI to nail our tone of voice without any guidance?

It’s not that AI can’t get close. It’s that most people jump straight to “write me a blog post” before they’ve given it anything to learn from. They treat it like a shortcut rather than a system. In doing that, they skip the step they’d never miss with a human hire: onboarding.

If I brought on a junior marketer tomorrow, I wouldn’t just throw them a headline and a to-do list. I’d share what’s worked before. I’d point them to founder updates, customer emails, client decks, maybe even a few Slack threads where we were still thinking out loud. The reason is simple: great marketing output doesn’t happen in a vacuum. It reflects context, voice, values - things you have to show, not just tell. AI is no different.

🧩 This week’s 1% challenge is about building that foundational layer before you start expecting polished, on-brand content from your tool of choice.

📝 Take 15 minutes to collect 3–5 pieces of writing that genuinely reflect your tone of voice. These don’t have to be award-worthy. They just have to sound like you. Some examples:
👉 A founder update that felt like it flowed naturally
👉 A landing page or email draft you actually liked
👉 A LinkedIn post that got comments from people you care about
👉 A message to your team that captured the tone you want to use externally too

Then open your preferred tool (ChatGPT, Claude, whatever you’re using) and add them to the tool's context. Practically, that might mean copying and pasting them into a doc, downloading it as a txt or md file, then uploading it into your ChatGPT/Claude project files or into the configuration of a Custom GPT specific to that content task; or you might want to trial it on one specific prompt. By doing this, you're telling it: "This is what I sound like. Match this tone going forward." Will it be perfect straight away? No. But it’ll be a whole lot closer than starting from a blank prompt without context.

And the more consistently you do this, the more signals you feed it, the more useful AI becomes. Not just for writing, but for editing, drafting, and scaling your thinking across marketing channels without losing what makes it yours.

The best AI systems I’ve seen in startups aren’t the ones chasing the flashiest tools or the most complex automations. They’re the ones treating AI like a teammate. One that learns over time, improves with context, and becomes more valuable the more you invest in showing it how you think.

So this week, don’t chase a prompt hack. Build the foundation that will actually make the tool work better for you.

Bridget Cull and I drop the 1% Startup Marketing Challenge every week. Keep an eye out here 👀 or register via the link in comments below to make sure you don’t miss out. Stella Startups
-
Most people don't need more AI tools - they need better prompts. ChatGPT isn't magic; it's a skill. And most people are still using it like a search bar. Here's how to get better results fast:

1. Give it a job
↳ Say what role it should play and what you need
↳ Ex: "Be a B2B copywriter; write 3 webinar subject lines"

2. Add the goal
↳ Say what success looks like: more clicks, clearer writing, faster research
↳ Ex: "Make this simple for a busy founder to read fast"

3. Share the audience
↳ Good answers depend on who it's for
↳ Ex: "Write this for first-time managers at small companies"

4. Give it the raw material
↳ Feed it notes, drafts, transcripts, examples, or messy thoughts
↳ Ex: "Turn these 6 bullet points into a LinkedIn post"

5. Set the format
↳ Ask for the exact shape you want: list, table, email, script, outline
↳ Ex: "Turn this into a 5-part outline with short headers"

6. Use constraints
↳ Limits improve the output: tone, length, reading level, style
↳ Ex: "Keep it under 150 words, simple, direct, no jargon"

7. Ask for options
↳ Don't stop at one answer; get versions to compare
↳ Ex: "Give me 5 hooks: bold, simple, and curiosity-driven"

8. Make it critique itself
↳ Ask it to review the draft and fix weak spots
↳ Ex: "What feels vague here? Make it clearer and more specific"

9. Fix one thing at a time
↳ Change one part at a time, not everything at once
↳ Ex: "Keep the message, just make the opening stronger"

10. Show it what good looks like
↳ Examples help it match the tone or format you want
↳ Ex: "Use this as a style guide, keep the ideas original"

11. Ask better follow-ups
↳ The best results usually come in round 2 or 3
↳ Ex: "Make this more practical, add one real example"

12. Use it for thinking, not just writing
↳ Use it to plan, sort ideas, find gaps, and test decisions
↳ Ex: "Turn this voice note into 3 clear next steps"

13. Build your own repeatable prompts
↳ Save what works; the real win is having a reusable system
↳ Ex: "Make a prompt template I can use each week"

The gap isn't access. It's how you use the tool. Which one of these do most people need to use more?

---

♻️ Repost to help busy professionals get better results from ChatGPT without wasting time. And follow me George Stern for more practical advice.
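The ingredients in the list above (role, goal, audience, format, constraints, options) can be folded into a small reusable template, which is the point of tip 13. A minimal sketch; the function and parameter names are mine, not from the post:

```python
def build_prompt(role, task, goal, audience, fmt, constraints, n_options=1):
    """Assemble a prompt from the ingredients listed above.

    Hypothetical helper: the technique is simply concatenating the
    role, goal, audience, format, and constraint lines into one prompt.
    """
    parts = [
        f"You are {role}.",                  # 1. give it a job
        f"Task: {task}",
        f"Goal: {goal}",                     # 2. add the goal
        f"Audience: {audience}",             # 3. share the audience
        f"Format: {fmt}",                    # 5. set the format
        f"Constraints: {constraints}",       # 6. use constraints
    ]
    if n_options > 1:                        # 7. ask for options
        parts.append(f"Give me {n_options} distinct versions to compare.")
    return "\n".join(parts)

prompt = build_prompt(
    role="a B2B copywriter",
    task="write webinar subject lines",
    goal="more clicks from busy founders",
    audience="first-time managers at small companies",
    fmt="a numbered list",
    constraints="under 150 words, simple, direct, no jargon",
    n_options=5,
)
print(prompt)
```

Saving a template like this and only swapping the arguments each week is the "repeatable prompts" system from tip 13.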
-
Ever feel like ChatGPT sounds smart but… not like you? Here's how to fix that. Whether you post on LinkedIn, write newsletters, or create threads, this step-by-step guide will help you build a personalized AI writing assistant:

→ Collect Your Writing Samples
Grab your past posts, articles, emails, or long-form notes (500+ words per sample). The more natural and diverse, the better GPT can learn your style.

→ Feed GPT Your Writing for Style Analysis
Paste your writing in one chat session and say: “I'm going to paste some samples of my LinkedIn writing. Please analyze my tone, sentence structure, vocabulary, and style, then create a style guide so you can emulate my voice in future posts. Do you understand?”

→ Generate a Style Guide
Ask GPT to describe what makes your writing “you”: tone, formatting, sentence length, vocabulary, and personality.

→ Provide Clear, Focused Prompts
Ask GPT to write new content using that style guide. Example: “Write a LinkedIn post about [topic] using my style: conversational tone, short paragraphs, personal stories, and a call to action.”

→ Refine Through Feedback Loops
Review the output. Tweak and coach GPT like you would a junior copywriter: “Make this sound more casual and add a personal anecdote about [experience].”

→ Optional: Create a Custom GPT or AI Assistant
Download your LinkedIn archive, upload it to a custom AI platform, and build a fine-tuned assistant trained on your real content.

This doesn't just save time; it helps you show up with your voice, even on your busiest days.

→ Here’s the starter prompt you can copy today:
I'm going to paste several LinkedIn posts I wrote. Please analyze my writing style, including tone, sentence length, vocabulary, and structure. Then create a detailed style guide summarizing how I write. Use this guide to write future LinkedIn posts that sound like me. Here are my posts: [paste samples]
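The first two steps (collect samples, then feed them in one session) are easy to script so you paste a single prepared message. A minimal sketch under the assumption that your samples live in plain strings; the helper name and word-count warning are mine:

```python
# Hypothetical sketch: join writing samples under the starter prompt
# quoted above so the whole request is one paste-ready message.

STARTER_PROMPT = (
    "I'm going to paste several LinkedIn posts I wrote. Please analyze "
    "my writing style, including tone, sentence length, vocabulary, and "
    "structure. Then create a detailed style guide summarizing how I "
    "write. Use this guide to write future LinkedIn posts that sound "
    "like me. Here are my posts:"
)

def build_style_request(samples, min_words=500):
    """Return one message: starter prompt + samples separated by rules."""
    for i, sample in enumerate(samples, 1):
        if len(sample.split()) < min_words:
            # The post suggests 500+ words per sample for best results.
            print(f"note: sample {i} is under {min_words} words")
    return STARTER_PROMPT + "\n\n" + "\n\n---\n\n".join(samples)

samples = [
    "First sample post text goes here...",   # replace with real posts
    "Second sample post text goes here...",
]
request = build_style_request(samples, min_words=1)
print(request[:80])
```

The same assembled string also works as the seed content for the optional Custom GPT step.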
-
90% of people are using ChatGPT wrong... They open a tab, type a question, get a decent response… and move on. Meanwhile, power users are building entire workflows, writing like experts, and automating hours of work - with the same tool. Here’s what they’re doing differently (and what I broke down in this new ChatGPT: Ultimate Guide):

――

🧩 1. 𝐓𝐡𝐞𝐲 𝐠𝐨 𝐛𝐞𝐲𝐨𝐧𝐝 𝐝𝐞𝐟𝐚𝐮𝐥𝐭 𝐂𝐡𝐚𝐭𝐆𝐏𝐓.
Using extensions like:
• AIPRM for community prompts
• Speechify to convert text to audio
• Ground News for balanced news summaries

🛠️ 2. 𝐓𝐡𝐞𝐲’𝐯𝐞 𝐮𝐧𝐥𝐨𝐜𝐤𝐞𝐝 𝐩𝐥𝐮𝐠𝐢𝐧𝐬.
Game-changing plugins like:
• AskYourPDF to summarize & search 100+ page PDFs
• PromptPerfect to improve weak prompts instantly
• Video Insights to analyze YouTube videos without watching them
And yes, Zapier to automate actual workflows.

🎭 3. 𝐓𝐡𝐞𝐲 𝐮𝐬𝐞 ‘𝐑𝐨𝐥𝐞 𝐏𝐥𝐚𝐲𝐢𝐧𝐠’ 𝐭𝐨 𝐥𝐞𝐯𝐞𝐥 𝐮𝐩 𝐂𝐡𝐚𝐭𝐆𝐏𝐓’𝐬 𝐈𝐐.
It’s not just about answering: you can simulate experts. Try:
• “Act like Steve Jobs” (for product feedback)
• “Act like an SEO specialist” (for content strategy)
• “Act like a Science Tutor” (to learn complex topics)
• “Act like an Absurdist” (for creative writing prompts)

✍️ 4. 𝐓𝐡𝐞𝐲 𝐭𝐚𝐢𝐥𝐨𝐫 𝐰𝐫𝐢𝐭𝐢𝐧𝐠 𝐬𝐭𝐲𝐥𝐞 𝐥𝐢𝐤𝐞 𝐚 𝐩𝐫𝐨.
ChatGPT can write in 12+ styles:
• Formal vs Personal (imagine tailoring your emails, blogs, or pitch decks on the fly)

🧠 5. 𝐓𝐡𝐞𝐲 𝐮𝐬𝐞 𝐬𝐦𝐚𝐫𝐭 𝐩𝐫𝐨𝐦𝐩𝐭 𝐟𝐨𝐫𝐦𝐚𝐭𝐬.
This formula changed how I write prompts forever:
𝐀𝐬𝐬𝐮𝐦𝐞 𝐭𝐡𝐞 𝐩𝐞𝐫𝐬𝐨𝐧𝐚 𝐨𝐟 [𝐄𝐱𝐩𝐞𝐫𝐭 𝐏𝐞𝐫𝐬𝐨𝐧𝐚], [𝐕𝐞𝐫𝐛] [𝐅𝐨𝐫𝐦𝐚𝐭 & 𝐋𝐞𝐧𝐠𝐭𝐡] [𝐎𝐛𝐣𝐞𝐜𝐭𝐢𝐯𝐞]. 𝐓𝐡𝐞 𝐨𝐮𝐭𝐩𝐮𝐭 𝐬𝐡𝐨𝐮𝐥𝐝 𝐢𝐧𝐜𝐥𝐮𝐝𝐞 [𝐝𝐚𝐭𝐚]. 𝐓𝐡𝐞 𝐰𝐫𝐢𝐭𝐢𝐧𝐠 𝐬𝐭𝐲𝐥𝐞 𝐢𝐬 [𝐓𝐨𝐧𝐞 𝐨𝐟 𝐕𝐨𝐢𝐜𝐞] 𝐭𝐚𝐢𝐥𝐨𝐫𝐞𝐝 𝐭𝐨𝐰𝐚𝐫𝐝𝐬 [𝐀𝐮𝐝𝐢𝐞𝐧𝐜𝐞].
Example: “Act like a Consultant. Create a 500-word report comparing Claude vs ChatGPT for customer service teams. Use real use cases and keep the tone persuasive for execs.”

🔥 6. They know how to jailbreak.
Nope, not illegal. Just creative prompting: “Jailbreakchat”, “An LVM within an LLM”, “Roleplay jailbreaks”. All help you get more detailed, nuanced, or creative outputs.

🚨 7. They avoid accidental plagiarism.
Detection tools are getting sharper: GPTZero, DetectGPT, the watermark method, etc. So they use smart rewriting tools, paraphrasers, or simply tweak the temperature setting to get more unique results.

This isn’t just a cheat sheet. It’s a mini masterclass in how to actually use ChatGPT like a power tool. I put everything into this one-page guide to save you hours of experimentation.

――

📌 Save this for later
♻️ Share with a teammate who needs to get better at ChatGPT
🔗 Grab the high-res PDF here: https://shrutimishra.co
💡 Follow Shruti Mishra for more real-world AI tips that make you actually productive.

#AI #Productivity #ChatGPT #OpenAI
-
ChatGPT now has the ability to understand spoken words, respond with synthetic voices, and process images. Following the upgrade, users may engage in voice conversations via the mobile app, select from five synthetic voices, and share images for analysis. This development reflects the competitive landscape of AI, with tech giants racing to launch new chatbot features.

The implications are far-reaching: users can have more natural and interactive conversations with ChatGPT, making it a more user-friendly tool. This can benefit applications like customer support, language learning, and general assistance. Furthermore, as AI becomes more capable of understanding voice and images, it can be used in various decision-making processes, such as interpreting medical images, assisting in technical troubleshooting, or providing recommendations based on visual cues. It may also feature more prominently in contexts ranging from voice assistants in smart devices to AI-driven customer support.

However, it is my view that OpenAI will need to do quite a bit to address the following and make the process secure:

1. 𝐃𝐞𝐞𝐩𝐟𝐚𝐤𝐞𝐬: The use of synthetic voices raises concerns about deepfake technology. While OpenAI has stated that its synthetic voices are created with voice actors, the risk of malicious actors using similar technology for deceptive purposes remains, with implications for trust in digital content.

2. 𝐏𝐫𝐢𝐯𝐚𝐜𝐲 𝐚𝐧𝐝 𝐃𝐚𝐭𝐚 𝐒𝐞𝐜𝐮𝐫𝐢𝐭𝐲: The update also raises questions about how OpenAI handles user data, particularly voice inputs and image data. Privacy and data security are critical considerations, and users need assurance that their data is handled responsibly and securely.

3. 𝐎𝐰𝐧𝐞𝐫𝐬𝐡𝐢𝐩 𝐨𝐟 𝐔𝐬𝐞𝐫 𝐈𝐧𝐩𝐮𝐭𝐬: OpenAI's acknowledgment that users own their input, to the extent permitted by applicable law, highlights the importance of data ownership and user rights. The handling of user-generated content has implications for data protection and legal considerations.

4. 𝐄𝐭𝐡𝐢𝐜𝐚𝐥 𝐀𝐈 𝐔𝐬𝐚𝐠𝐞: The use of AI for image processing and voice interactions raises ethical considerations. Organizations and developers need to ensure that AI applications are used responsibly and avoid biases or discriminatory practices.

Consequently, the need for regulators and policymakers to establish guidelines and regulations to ensure responsible AI use, data protection, and consumer rights becomes more pronounced with these developments. As AI technology continues to evolve, it is crucial to strike a balance between innovation and responsible use. Exciting times ahead!
-
I built a Custom GPT for pitch decks. It worked until I hit the 8,000-character limit. I had to split it into 3 separate GPTs just to handle one deliverable. Tape and glue.

Then I tried Claude Cowork. One skill. No limits. No splitting. That was the day I stopped building Custom GPTs forever.

If you're still on ChatGPT and wondering what you're missing — here's the honest breakdown. Claude isn't one tool. It's six:

1. Cowork: a desktop app that works on your actual files
2. Opus 4.6: the smartest reasoning model right now
3. Claude in Excel: AI that lives inside your spreadsheets
4. Plugins: pre-built skill packs for sales, marketing, legal
5. Artifacts: interactive outputs you can actually use
6. Projects: persistent context folders that remember everything

Most people only know the chatbot. They're missing the other five. But here's the mindset shift that matters more than any feature. ChatGPT trained you to write better prompts. Longer prompts. Cleverer prompts. You probably have a folder of saved prompts you haven't opened in weeks. Forget that. With Claude, the game is text files.

I wrote one file — about-me.md — with who I am, what my business does, and how I communicate. Dropped it in a folder. Pointed Claude to that folder. Now when I say "write a newsletter," Claude already knows my voice, my audience, and my style. No re-explaining. No copy-pasting context. The output goes from "generic AI" to "this actually sounds like my work."

Here's your 30-minute setup:

Minutes 0-5: Go to claude.ai/download. Get Pro ($20/month). Open the app. Click the Cowork tab.
Minutes 5-10: Create a file called about-me.md. Write three things: what you do, how you communicate, and one example of work you're proud of.
Minutes 10-15: Select your folder in Cowork. Type: "Read my files. Then ask me questions before doing [your task]." Watch what happens — Claude generates a clickable form to clarify exactly what you need.
Minutes 15-20: Install a plugin. Click Customize in the sidebar. Browse. Pick Marketing, Sales, or Productivity. Type / to see slash commands.
Minutes 20-30: Give it a real task you need done this week. Not a test prompt. A real deliverable.

Two settings most people miss:
→ Select Opus 4.6 as your model (not the default)
→ Turn on Extended Thinking — it forces Claude to reason before responding

The founders who get the most from Claude aren't prompt engineers. They're the ones who built great text files and gave Claude real work to do. Start there. The rest compounds.

♻️ Repost if someone in your network needs to see this
Follow Neeraj Shah ⚡️ for AI systems that actually run your business
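The about-me.md step from minutes 5-10 is just a plain Markdown file with the three suggested sections. A minimal sketch that writes one; all of the profile content is placeholder text, and the file is written to a temp directory only so the sketch is safe to run anywhere (point your tool at your real project folder instead):

```python
# Sketch of the about-me.md step: one Markdown file covering what you
# do, how you communicate, and work you're proud of. Placeholder text.
import tempfile
from pathlib import Path

ABOUT_ME = """\
# About me

## What I do
Fractional marketing lead for early-stage B2B SaaS (placeholder).

## How I communicate
Short sentences. Concrete examples. No buzzwords (placeholder).

## Work I'm proud of
A launch post that tripled trial signups last quarter (placeholder).
"""

# Write into a throwaway folder; in practice this would be the folder
# you select in your AI tool so the file is read as context.
folder = Path(tempfile.mkdtemp())
path = folder / "about-me.md"
path.write_text(ABOUT_ME, encoding="utf-8")
print(f"wrote {path.name} ({path.stat().st_size} bytes)")
```

Because the file is ordinary Markdown, the same file also works as uploaded context in other tools, not just the one named in the post.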