Should we have a government-funded “public option” for AI?
My reaction to the article “How Artificial Intelligence Can Aid Democracy” by Bruce Schneier, Henry Farrell, and Nathan E. Sanders, published in Slate on April 21, 2023.
Koko, ChatGPT, and the outrage over corporate experimentation
The mental health service Koko sparked outrage by announcing that it had experimented with ChatGPT for mental health support, apparently without informing users. (It turned out users were informed all along, and the CEO’s Twitter thread was just really confusing.)
Here, I dig into the outrage and argue that much of it was focused on the wrong issue: the ethics of corporate experiments.
Why I don’t think ChatGPT is going to replace search anytime soon
There’s been a lot of breathless coverage of ChatGPT in the past week. One comment I keep seeing on social media is that it’s going to replace Google and other search engines. I don’t think that’s going to happen anytime soon, and here’s why.
Do stock image creators know they're training AI to compete with them?
Recent announcements have revealed that stock image collections are being used to train generative AI. Compared to using web-scraped data, this approach is less legally risky and potentially fairer to the creators of the images. But I have some nagging concerns about it.
Class-action lawsuit filed over copyright and privacy issues stemming from GitHub Copilot
Last week I posted about the copyright and privacy risks associated with large language models. One of the examples I discussed was GitHub Copilot, the code-writing assistant based on OpenAI's Codex model. A key problem with Copilot relates to code licensing. Today, the issue headed to court.
Large language models can steal work and spill secrets. Here’s why we should care.
Large language models are trained on massive datasets scraped from the web. They memorize some of that data and can regurgitate it verbatim, including personal data and copyrighted material. Is that a problem?