Adversarial images aren’t a problem—they’re an opportunity to explore new ways of interacting with AI.
Mike Loukides is Vice President of Content Strategy for O'Reilly Media, Inc. He's edited many highly regarded books on technical subjects that don't involve Windows programming. He's particularly interested in programming languages, Unix and what passes for Unix these days, and system and network administration. Mike is the author of System Performance Tuning and a coauthor of Unix Power Tools. Most recently, he's been fooling around with data and data analysis, languages like R, Mathematica, and Octave, and thinking about how to make books social. Mike can be reached on Twitter @mikeloukides and on LinkedIn.
We shouldn't ask our AI tools to be fair; instead, we should ask them to be less unfair, and we should be willing to iterate until we see improvement.
We won’t get the chance to worry about artificial general intelligence if we don’t deal with the problems we have in the present.
From data quality and personalization to customer acquisition and retention, AI and ML will shape the customer experience of the future.
Programmers have built great tools for others. It’s time they built some for themselves.
More than anything else, O'Reilly's AI Conference was about making the leap to AI 2.0.
Balancing risk and reward is a necessary tension we'll need to understand as we continue our journey into the age of data.
Why companies are turning to specialized machine learning tools like MLflow.
The toughest bias problems are often the ones you only think you’ve solved.
Machines will need to make ethical decisions, and we will be responsible for those decisions.
The internet itself is a changing context—we’re right to worry about data flows, but we also have to worry about the context changing even when data doesn’t flow.
Radar spots and explores emerging technology themes so organizations can succeed amid constant change.
Mapping the complex forces that are reshaping organizations and changing the employee/employer relationship.
The software industry has demonstrated, all too clearly, what happens when you don’t pay attention to security.
Much like human speech, bird song learning is social; perhaps we'll discover machine learning is social, too.
Consent is the first step toward the ethical use of data, but it's not the last.
We need to do more than automate model building with autoML; we need to automate tasks at every stage of the data pipeline.
Our bad AI could be the best tool we have for understanding how to be better people.
If we’re going to think about the ethics of data and how it’s used, then we have to take into account how data flows.
HTTPS "everywhere" means everywhere—not just the login page, or the page where you accept donations. Everything.