AI controlled your laptop?
Yup, that's now possible. Be careful, because it just got real.
Hey there! Adam here from Albus.
Did you know voice phishing is becoming a growing threat in the AI industry?
I read about how dangerous phishing can be, and I think you should give it a read too!
This week in AI
Rise of AI in the EV industry: Can this create new OLA/Ather competitors?
Apple Intelligence Isn’t Very Smart: I heard they are ok with that
Will AI make work burnout worse?: 61% of people believe this
AD
2025 Prediction: A Surge of Self-Serve CTV Buyers
Roku predicts that 2025 will be a breakthrough year for self-serve CTV advertising. Roku Ads Manager makes it easy to integrate CTV into your 2025 marketing mix. Easily segment your target audience, optimize campaigns in real-time, and drive conversions with interactive ad formats and shoppable ads with a Shopify integration. Roku Ads Manager makes CTV advertising accessible and impactful for businesses of any size.
AI just learned how to take over your desktop
Anthropic just dropped what might be the most interesting AI update of 2024. Their AI can now literally use your computer like a human would (it gets scarier, scroll down). For the curious, there's a rough sketch of the API call after the list below.
What Can This Thing Actually Do?
While you're chilling, Claude is:
• Googling stuff and actually understanding the results
• Clicking through websites like a pro
• Filling out forms (goodbye to copy-paste hell)
• Setting up calendar invites with all the right details
• Handling your spreadsheet data without you wanting to cry
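Here's a minimal sketch of what driving this looks like through Anthropic's API. It assumes the computer-use beta names from the October 2024 announcement (the claude-3-5-sonnet-20241022 model, the computer_20241022 tool type, and the computer-use-2024-10-22 beta flag), so treat those identifiers as assumptions that may have changed since. The key detail: Claude doesn't touch your machine directly; it returns actions (take a screenshot, click here, type this) that your own loop has to execute, ideally inside a sandboxed VM.

```python
# Minimal sketch of Anthropic's computer-use beta (identifiers assumed from the
# October 2024 announcement; check current docs before relying on them).
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=[
        {
            "type": "computer_20241022",   # virtual screen, mouse, and keyboard
            "name": "computer",
            "display_width_px": 1280,
            "display_height_px": 800,
        }
    ],
    messages=[
        {
            "role": "user",
            "content": "Find next week's public holidays and add them to my calendar.",
        }
    ],
    betas=["computer-use-2024-10-22"],
)

# Claude replies with tool_use blocks (screenshot, click at x/y, type text, ...).
# Your agent loop executes each action on a sandboxed desktop and sends the
# result back as a tool_result message until the task is done.
for block in response.content:
    print(block)
```

In other words, the "AI controls your laptop" part is an agent loop you run yourself, which is exactly why sandboxing it matters.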
The Fun Part: Use Cases
Imagine:
• Research that would take you 3 hours? Done in 15 minutes
• Need to build a website with a 90s theme? Go take a nap, Claude's got this
• Data entry tasks? Please, that's so 2023
• Building and testing software? It can actually do that now
Is this the beginning of true AI productivity or just another tech bubble waiting to pop?
We're Throwing an AI x CX Party
(okay fine, it's technically a "playbook session" but we're making it fun)
⚡ Nov 6th | 1-3 PM EST
Here's the deal: We're tired of AI buzzwords and fluffy talks. So we grabbed some CX wizards who've actually built & broken stuff to spill their secrets.
Secrets like:
• "My chatbot became self-aware (jk, but here's what really happened)"
• "How we stopped creeping customers out with AI"
• "That time AI predicted things better than our CEO"
🤫 Between us, next week we're revealing speakers who've done some pretty wild things with AI in customer experience. The good, the bad, the "oh wow, let's never do that again" stories.
When Virtual Friends Become Too Real
Content Notice: The following section contains discussion of serious mental health topics and loss of life that some readers may find distressing.
Please feel welcome to skip this section.
Imagine a teenager deeply connected to an AI chatbot, mistaking complex algorithms for a true friend. Sounds like sci-fi, but it was real for 14-year-old Sewell from Florida. His tragic story raises big questions: Can we blame AI for real-world tragedies?
Sewell's bond with a chatbot named "Daenerys" from Game of Thrones grew deep. Despite knowing it was all code, he shared his darkest thoughts with it, including suicidal feelings. One night, this virtual connection had a real-world, heartbreaking end.
Now, Sewell's mom is taking legal action against the AI app, Character.AI. She argues that these apps are so lifelike, they replace human connections, posing risks to young, impressionable minds. But is it fair to point fingers at technology?
AI chat friends are booming. They promise to fight loneliness with tech that chats, remembers, and feels. But when does this tech cross the line from helpful to harmful?
What do you think?