December 5, 2023
We’re continuing to push AI content into other areas, as appropriate. AI is influencing everything, including biology. Perhaps the biggest new trend, though, is the interest that security researchers are taking in AI. Language models present a whole new class of vulnerabilities, and we don’t yet know how to defend against most of them. We’ve known about prompt injection for some time, but SneakyPrompt is a way of tricking language models by composing nonsense words from fragments that are still meaningful to the model. And cross-site prompt injection means embedding a hostile prompt in a document and then sharing that document with a victim who uses an AI-augmented editor; the hostile prompt executes when the victim opens the document. Both of those have already been fixed, but if I know anything about security, that is only the beginning.
- We have seen several automated tools for testing and evaluating AI systems, including Giskard and Talc.
- Amazon has announced Q, an AI chatbot designed for business. They claim that it can draw on your company’s private data, suggesting that it uses the RAG pattern to supplement the model itself.
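The RAG pattern mentioned above can be sketched in a few lines. This is a toy illustration, not Amazon Q’s actual implementation: the documents, the `retrieve` function, and the keyword-overlap scoring are all stand-ins for a real vector-search retriever.

```python
# Minimal sketch of retrieval-augmented generation (RAG):
# retrieve relevant private documents, then prepend them to the prompt
# so the model can answer from data it was never trained on.
# All names here (DOCUMENTS, retrieve, build_prompt) are illustrative.

DOCUMENTS = [
    "Q3 revenue grew 12% year over year, driven by the enterprise tier.",
    "The on-call rotation for the payments team changes every Monday.",
    "Employee travel must be booked through the internal portal.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query.
    A production system would use embeddings and a vector index instead."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble the augmented prompt that would be sent to the model."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How did revenue grow in Q3?", DOCUMENTS)
print(prompt)
```

The point of the pattern is that the model itself is never retrained; the private data rides along in the prompt at query time.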
- Let the context wars begin. Anthropic announces a 200K context window for Claude 2.1, along with a 50% decline in the percentage of false statements (hallucinations). Unlike ...