Prompt transformation and the Gemini debacle
A term I recently learned, “prompt transformation”, describes how prompts are manipulated under the hood to provide more diverse results.
Wired falls for bogus claims of AI writing detector startups
Wired is pushing the false idea that AI writing detectors actually work.
Latest overblown AI claim: GPT-4 achieves a perfect score on questions representing the entire MIT Math, EE, and CS curriculum
The paper, released on arXiv two days ago, is getting a lot of attention. Some issues with it were immediately apparent. Then three MIT students dug up the study’s test set, and discovered things were much worse than they initially appeared.
Should we have a government-funded “public option” for AI?
My reaction to the article “How Artificial Intelligence Can Aid Democracy” by Bruce Schneier, Henry Farrell, and Nathan E. Sanders, published April 21, 2023 by Slate.
Guardrails on large language models, part 4: content moderation
The final post in a four-part series on the guardrails on large language models.
Guardrails on large language models, part 3: prompt design
The third in a four-part series of posts about the guardrails on large language models.
Koko, ChatGPT, and the outrage over corporate experimentation
Mental health service Koko sparked outrage by announcing it experimented with ChatGPT for mental health support, apparently without informing users. (It turned out users were informed all along, and the CEO’s Twitter thread was just really confusing.)
Here, I dig into the outrage and argue that much of it was focused on the wrong issue: the ethics of corporate experiments.
Why I don’t think ChatGPT is going to replace search anytime soon
There’s been a lot of breathless coverage of ChatGPT in the past week. One comment I keep seeing on social media is that it’s going to replace Google and other search engines. I don’t think that’s going to happen anytime soon, and here’s why.
Class-action lawsuit filed over copyright and privacy issues stemming from GitHub Copilot
Last week I posted about the copyright and privacy risks associated with large language models. One of the examples I discussed was GitHub Copilot, the code-writing assistant based on OpenAI's Codex model. One of the key problems with Copilot relates to code licensing. Today, the issue headed to court.
Large language models can steal work and spill secrets. Here’s why we should care.
Large language models are trained on massive datasets of web-scraped data. They memorize some of it, and can regurgitate it verbatim – including personal data and copyrighted material. Is that a problem?
Large language models are vulnerable to “prompt injection” attacks
Large language models are vulnerable to a newly discovered kind of adversarial attack, known as “prompt injection, ” in which users trick the model into disregarding its designer’s instructions.