Radar trends to watch: June 2021

Trends in AI, Security, Programming, and More

By Mike Loukides
June 1, 2021

The most fascinating idea this month is POET, a closed system in which bots overcome obstacles that the system itself generates. It’s a learning feedback loop that might conceivably be a route to much more powerful AI, if not general intelligence.

It’s also worth noting the large number of entries under security. Security is a field everyone talks about, but where few companies end up doing much. Will the attack against the Colonial Pipeline change anything? We’ll see. And one trend is notably absent: I didn’t include anything on cryptocurrency. That’s because, as far as I can tell, there’s no new technology, just a spike (and collapse) in the prices of the major currencies. If anything, that demonstrates how easily these currencies are manipulated.


AI

  • Using AI to create AI: POET is a completely automated virtual world in which software bots learn to navigate an obstacle course. The navigation problems themselves are created by the world, in response to its evaluation of the bots’ performance. It’s a closed loop. Is it evolving toward general intelligence?
  • IBM is working on using AI to write software, focusing on code translation (e.g., COBOL to Java). It has released CodeNet, a dataset of 14 million source code samples in many different programming languages, designed to train deep learning systems for software development tasks. Microsoft is getting into the game too, using GPT-3 to generate code.
  • Vertex AI is a “managed machine learning platform” that includes most of the tools developers need to train, deploy, and maintain models in an automated way. It claims to reduce the amount of code developers need to write by 80%.
  • Google has announced LaMDA, a natural language model at GPT-3 scale that was trained specifically on dialog. Because it was trained on dialog rather than on unrelated text, it can participate more naturally in conversations and appears to have a sense of context.
  • Automated data cleaning is a trend we started watching a few years ago with Snorkel. Now MIT has developed a tool that uses probabilistic programming to fix errors and omissions in data tables.
  • AI is becoming an important tool in product development, supplementing and extending the work of engineers designing complex systems. This may lead to a revolution in CAD tools that can predict and optimize a design’s performance.
  • Designing distrust into AI systems: Ayanna Howard is researching the trust people place in AI systems and, unsurprisingly, finding that people trust AI systems too much. Tesla accidents are only one symptom. How do you build systems that discourage users from trusting them more than they should?
  • Important lessons in language equity: While automated translation is often seen as a quick cure for supporting non-English speaking ethnic groups, low quality automated translations are a problem for medical care, voting, and many other systems. It is also hard to identify misinformation when posts are translated badly, leaving minorities vulnerable.
  • Andrew Ng has been talking about the difference between putting AI into production and getting it to work in the lab. That’s the biggest hurdle AI faces on the road to more widespread adoption. We’ve been saying for some time that it’s the unacknowledged elephant in the room.
  • According to The New Stack, the time needed to deploy a model has increased year over year, and at 38% of the companies surveyed, data scientists spend over half of their time in deployment. These numbers increase with the number of models.
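
The MIT tool mentioned above is far more sophisticated than this, but the core idea behind probabilistic data cleaning can be sketched in a few lines: repair a missing value with the most probable observed value, and attach a confidence score to the repair. The function name and toy data below are invented for illustration; this is not the tool’s actual API.

```python
from collections import Counter

def impute_column(values):
    """Fill None entries with the most frequent observed value,
    returning the repaired column plus a crude confidence score
    (the empirical frequency of the chosen value)."""
    observed = [v for v in values if v is not None]
    counts = Counter(observed)
    best, n = counts.most_common(1)[0]
    confidence = n / len(observed)
    repaired = [best if v is None else v for v in values]
    return repaired, confidence

# A toy "city" column with two missing entries.
cities = ["Boston", "Boston", None, "Cambridge", "Boston", None]
repaired, conf = impute_column(cities)
print(repaired)            # missing entries become "Boston"
print(round(conf, 2))      # 0.75: 3 of the 4 observed values are "Boston"
```

A probabilistic programming system generalizes this by modeling the joint distribution over all columns, so a repair in one column can be conditioned on the values in the others.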

Data

  • Collective data rights are central to privacy, and are rarely discussed. It’s easy, but misleading, to focus discussions on individual privacy, but the real problems and harms stem from group data. Whether Amazon knows your shoe size doesn’t really matter; what does matter is whether they can predict what large groups want, and force other vendors out of the market.
  • Mike Driscoll has been talking about the stack for Operational Intelligence. OI isn’t the same as BI; it’s about a real time understanding of the infrastructure that makes the business work, rather than day to day understanding of sales data and other financial metrics.
  • Deploying databases within containerized applications has long been difficult. DataStax and other companies have been evolving databases to work well inside containers. This article is primarily about Cassandra and K8ssandra, but as applications move into the cloud, all databases will need to change.

Programming

  • Software developers are beginning to think seriously about making software sustainable. Microsoft, Accenture, GitHub, and Thoughtworks have created the Green Software Foundation, which is dedicated to reducing the carbon footprint of building and running software. O’Reilly Media will be running an online conversation about cloud providers and sustainability.
  • Google has released a new open source operating system, Fuchsia, currently used only in its Home Hub. Fuchsia is one of the few recent operating systems that isn’t Linux-based. Application programming is based on Flutter, and the OS is designed to be “invisible.”
  • A service mesh without proxies is a big step forward for building applications with microservices; it simplifies one of the most difficult aspects of coordinating services that are working together.
  • As much as they hate the term, Unqork may be a serious contender for enterprise low-code. They are less interested in democratization and “citizen developers” than in making professional software developers more efficient.
  • The evolution of JAMstack: distributed rendering, streaming updates, and extending collaboration to non-developers.
  • Grain is a new programming language designed to target WebAssembly (Wasm). It is strongly typed and, while not strictly functional, has a number of features borrowed from functional languages.
  • Grafar and Observable Plot are new JavaScript libraries for browser-based data visualization. Observable Plot was created by Mike Bostock, the author of the widely used D3 library.

Security

  • Morpheus is a microprocessor that randomly changes its architecture to foil attackers. It’s a fascinating idea: in a three-month trial, 525 attackers were unable to crack it.
  • Self-sovereign identity combines decentralized identifiers with verifiable credentials that can be stored on devices. Credentials are answers to yes/no questions (for example, has the user been vaccinated against COVID-19?).
  • A Wi-Fi attack (now patched) against Teslas via the infotainment system doesn’t yield control of the car, but it can take over everything the infotainment system controls, including opening doors and changing seat positions. Clearly the infotainment system controls too much. Other automakers are believed to use the same software in their cars.
  • Passphrases offer better protection than shorter passwords governed by complex composition rules. This has been widely known for several years now; the important question is why companies aren’t doing anything about it. We know all too well that passwords are ineffective, and that forcing users to change them regularly is an anti-pattern.
  • Fawkes and other tools for defeating face recognition work by adding small perturbations that confuse the algorithms. For the moment, at least. Face recognition systems already appear to be catching up.
  • Tracking phishing sites has always been a problem. Phish.report is a new service for reporting phishing sites, and notifying services that flag phishing sites.
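
As a rough illustration of the verifiable-credentials idea, here is a toy Python sketch of a signed yes/no claim. Real self-sovereign identity systems (such as those following the W3C Verifiable Credentials model) use public-key signatures and decentralized identifiers rather than a shared HMAC secret; the key, field names, and DIDs below are invented for illustration.

```python
import hmac
import hashlib
import json

# Toy issuer secret -- a real issuer would sign with a private key.
ISSUER_KEY = b"demo-issuer-secret"

def issue(subject_did, claim, answer):
    """Issue a credential: a yes/no claim about a subject, signed by the issuer."""
    cred = {"subject": subject_did, "claim": claim, "answer": answer}
    payload = json.dumps(cred, sort_keys=True).encode()
    cred["signature"] = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return cred

def verify(cred):
    """Recompute the signature over everything except the signature itself."""
    body = {k: v for k, v in cred.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["signature"])

cred = issue("did:example:alice", "vaccinated_covid19", True)
print(verify(cred))       # True
cred["answer"] = False    # tampering with the answer breaks the signature
print(verify(cred))       # False
```

The point of the design is that the verifier learns only the yes/no answer, not the underlying medical record.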
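
The arithmetic behind the passphrase claim is simple: a randomly generated passphrase’s strength is the number of words times the bits per word. A minimal sketch in Python, using a deliberately tiny wordlist (a real generator would draw from a large list such as EFF’s ~7,776-word diceware list, giving about 12.9 bits per word):

```python
import math
import secrets

# Tiny illustrative wordlist; entropy numbers below are correspondingly tiny.
WORDS = ["correct", "horse", "battery", "staple", "orbit", "velvet",
         "glacier", "mango", "puzzle", "lantern", "quartz", "ripple"]

def passphrase(n_words=4, sep="-"):
    """Pick words uniformly with a cryptographically secure RNG."""
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())
# Entropy in bits: n_words * log2(len(wordlist)).
print(round(4 * math.log2(len(WORDS)), 1))  # 14.3 bits with this toy list
```

With the full EFF list, four words give roughly 51 bits of entropy while remaining memorable, which is the practical argument for passphrases over short, rule-encumbered passwords.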

Web and Social Media

  • Ben Evans has a great discussion of online advertising and customer acquisition in a post-Cookie world.
  • Models from epidemiology and the spread of viruses can be used to understand the spread of misinformation. The way disease spreads and the way misinformation spreads turn out to be surprisingly similar.
  • Google brings back RSS in Chrome? The implementation sounds awkward, and there have always been decent RSS readers around. But Google has clearly decided that it can’t kill RSS off, or that it doesn’t want web publishing to become even more centralized.
  • Video editing is exploding. YouTube made that old news, but it’s set to explode again, with new tools, new users, and a growing demand for professional-quality video on social media.
  • New York has passed a law requiring ISPs to provide broadband to poor families for $15/month. That price provides 25 Mbps downloads; low-income households can get high-speed broadband for $20/month.
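
A minimal version of such an epidemiological model is the classic SIR (susceptible/infected/recovered) compartment model, with “infected” reinterpreted as “actively sharing the misinformation.” The sketch below uses a discrete-time approximation, and the parameter values are made up for illustration:

```python
def sir_step(s, i, r, beta=0.3, gamma=0.1):
    """One discrete time step of the SIR model.
    beta: transmission rate (how readily sharers infect the susceptible).
    gamma: recovery rate (how quickly sharers lose interest)."""
    new_infections = beta * s * i
    recoveries = gamma * i
    return s - new_infections, i + new_infections - recoveries, r + recoveries

# Start with 1% of the population actively sharing.
s, i, r = 0.99, 0.01, 0.0
for _ in range(100):
    s, i, r = sir_step(s, i, r)

# The population fractions are conserved at every step.
print(round(s + i + r, 6))  # 1.0
```

The analogy in the research is that interventions like flagging or rate-limiting shares act like reducing beta, which can push the effective reproduction number below 1 and stop the spread.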

Hardware

  • Google, Apple, and Amazon back Matter, a standard for interoperability between smart home devices. A standard for interoperability is important, because nobody wants a “smart home” where every appliance, from individual light bulbs to media players, requires a separate app.
  • Moore’s law isn’t dead yet: IBM has developed 2 nanometer chip technology; the best widely used technology is currently 7nm. This technology promises lower power consumption and faster speeds.
  • Google plans to build a commercially viable error-corrected quantum computer by 2029. Error correction is the hard part: it will require on the order of 1 million physical qubits, while current quantum computers have under 100 qubits.

Biology

  • The photo is really in bad taste, but researchers have developed a medical sensor chip so small that Bill Gates could actually put it into your vaccine! It’s powered by ultrasound, and uses ultrasound to transmit data.
  • With sensors implanted in his brain, a paralyzed man was able to “type” by imagining writing. AI decoded signals in his brain related to the intention to write (not the actual signals to his muscles). He was able to “type” at roughly 15 words per minute with a 5% error rate.
