“Human in the loop” software development will be a big part of the future.
An overview and framework, including tools that can be used to enable automation.
This collection of AI resources will get you up to speed on the basics, best practices, and latest techniques.
The personal robot temi takes the robotic behaviors humans exhibit in the “iPhone slump” and moves them back where they belong: to an actual robot.
Dave Patterson and other industry leaders discuss how MLPerf will define an entire suite of benchmarks to measure the performance of machine learning software, hardware, and cloud systems.
MLPerf is a new set of benchmarks compiled by a growing list of industry and academic contributors.
Using machine learning, deep learning, and cognitive computing in concert can help enterprises gain competitive edges.
Get a basic overview of machine learning and then go deeper with recommended resources.
Meihong Wang explains how Facebook thinks about personalization and how the company uses machine learning to provide personalized experiences.
Olga Russakovsky explains how her organization, AI4ALL, aims to increase diversity and inclusion in AI development and research.
George Church discusses the IARPA MICrONS project, which aims to revolutionize machine learning by reverse-engineering the algorithms of the brain.
Ron Bodkin explains what a tensor is and why you should care.
Thomas Reardon offers an overview of brain-machine interface (BMI) technology and shares CTRL-Labs’s transformative and noninvasive neural interface approach.
Dario Gil explores state-of-the-art computing for AI as it exists today as well as an innovation that will lead us into the decades to come: quantum computing for AI.
Abhijit Deshpande explains how to use machine learning to identify root causes of problems in minutes instead of hours.
Kavya Kopparapu shares her inspiration for starting GirlsComputingLeague.
Ben Lorica and Roger Chen discuss the state of reinforcement learning and automation.
Watch highlights covering artificial intelligence, machine learning, automation, and more from the Artificial Intelligence Conference in New York 2018.
Fiaz Mohamed and Justin Herz discuss how artificial intelligence can improve content discovery and monetization.
Zoubin Ghahramani discusses recent advances in artificial intelligence, highlighting research in deep learning, probabilistic programming, Bayesian optimization, and AI for data science.
Dan Mbanga explores how accelerating AI experimentation has influenced innovations such as Amazon Alexa, Prime Air, and Amazon Go.
Fiaz Mohamed explains how Intel AI solves today’s business problems.
Manuela Veloso looks at the role humans can play in autonomy-based AI interactions and the underlying challenges to AI.
Mary Beth Ainsworth offers an overview of SAS deep learning and computer vision capabilities that help map wildlife and scale conservation efforts around the world.
Food production needs to double by 2050 to feed the world’s growing population. Jennifer Marsman details a solution that uses sensors in the soil, aerial imagery from drones, and machine learning.
We’re currently laying the foundation for future generations of AI applications, but we aren’t there yet.
Solving the challenges of efficiency, automation, and safety will require cooperation between researchers and engineers spanning both academia and industry.
A few ways to think differently and integrate innovation and AI into your company's altruistic pursuits.
Innovations that improve detection of, and response to, criminal attacks on financial systems.
Our survey reveals how organizations are using tools, techniques, and training to apply AI through deep learning.
The AI Conference in NY will feature tutorials, conference sessions, and executive briefings to help business leaders understand and plan for AI technologies.
Why we're taking the AI Conference to Beijing.
The top 5 ways to immerse yourself in deep learning and MXNet.
Leveraging the potential of AI to gain maximum ROI.
A look at the parallels between human and machine knowledge acquisition.
Opportunities and challenges companies will face when integrating and implementing deep learning frameworks.
A step-by-step tutorial to develop an RNN that predicts the probability of a word or character given the previous word or character.
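The tutorial builds an RNN; as a framework-free sketch of the underlying idea (estimating the probability of the next character given the previous one), here is a minimal bigram model in plain Python. The function name and training string are illustrative, not from the tutorial.

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count character bigrams, then normalize the counts into
    conditional probabilities P(next_char | prev_char)."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(text, text[1:]):
        counts[prev][nxt] += 1
    model = {}
    for prev, ctr in counts.items():
        total = sum(ctr.values())
        model[prev] = {c: n / total for c, n in ctr.items()}
    return model

model = train_bigram_model("abracadabra")
# Every 'b' in the training text is followed by 'r', so P('r' | 'b') = 1.0.
print(model["b"]["r"])  # → 1.0
```

An RNN generalizes this by conditioning on a hidden state summarizing the whole preceding sequence rather than on a single previous character.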
A step-by-step tutorial to build generative models through generative adversarial networks (GANs) to generate a new image from existing images.
A step-by-step tutorial on how to use TensorFlow to build a multi-layered convolutional network.
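The tutorial itself uses TensorFlow; as an illustration of the operation at the core of such a network, here is a naive “valid”-mode 2D convolution in plain Python (strictly, cross-correlation, which is what deep learning frameworks compute). The edge-detector kernel is an illustrative example, not taken from the tutorial.

```python
def conv2d(image, kernel):
    """Naive 'valid' 2D cross-correlation: slide the kernel over the
    image and sum elementwise products at each position."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A vertical-edge detector applied to a tiny image with a dark-to-bright edge:
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 1],
          [-1, 1]]
print(conv2d(image, kernel))  # → [[0, 2, 0], [0, 2, 0]]
```

The strong responses in the middle column mark the edge; stacking many learned kernels, with nonlinearities and pooling between layers, is what the multi-layered network in the tutorial does at scale.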
Deep learning’s effectiveness is often attributed to the ability of neural networks to learn rich representations of data.
How CapsNets can overcome some shortcomings of CNNs: they require less training data, preserve image details, and handle ambiguity better.
Image recognition and machine learning for martech and adtech.
Using machine learning to understand and leverage text.
Finding anomalies in time series using neural networks.
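The article applies neural networks; a much simpler baseline for the same task, useful as a point of comparison, flags points that deviate sharply from the statistics of a trailing window. This rolling z-score detector is an illustrative substitute, not the article's method.

```python
import statistics

def zscore_anomalies(series, window=5, threshold=3.0):
    """Flag indices whose value deviates from the mean of the
    preceding `window` points by more than `threshold` standard
    deviations."""
    anomalies = []
    for i in range(window, len(series)):
        ctx = series[i - window:i]
        mu = statistics.fmean(ctx)
        sigma = statistics.pstdev(ctx)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

data = [10, 11, 10, 9, 10, 11, 10, 50, 10, 11]
print(zscore_anomalies(data))  # → [7]
```

Neural approaches earn their keep when anomalies are defined by complex temporal patterns rather than single outlying values like the spike at index 7 here.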
TensorFlow Lite enriches the mobile experience.
We need a new model for how AI systems and humans interact.
RISE Lab’s Ray platform adds libraries for reinforcement learning and hyperparameter tuning.
Though they are typically applied to vision problems, convolutional neural networks can be very effective for some language tasks.
How to use AI as a tool in your business.
Use cases and tips to help businesses take full advantage of AI technology.
How to build a multilayered LSTM network to infer stock market sentiment from social conversation using TensorFlow.
From methods to tools to ethics, Ben Lorica looks at what's in store for artificial intelligence.
Experts weigh in on what we can expect from AI in 2018.
Solving problems with gradient ascent, and training an agent in Doom.
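The Doom agent above is trained with policy gradients; stripped of the reinforcement learning machinery, gradient ascent itself is just repeated climbing along the gradient. A toy sketch with a finite-difference gradient estimate (the function and step sizes are illustrative assumptions, not from the post):

```python
def gradient_ascent(f, x0, lr=0.1, steps=200, eps=1e-6):
    """Maximize f by repeatedly stepping in the direction of a
    central finite-difference estimate of the gradient."""
    x = x0
    for _ in range(steps):
        grad = (f(x + eps) - f(x - eps)) / (2 * eps)
        x += lr * grad
    return x

# Maximize f(x) = -(x - 3)^2, whose unique maximum is at x = 3.
f = lambda x: -(x - 3) ** 2
x_star = gradient_ascent(f, x0=0.0)
print(round(x_star, 3))  # → 3.0
```

In policy-gradient RL the same ascent is performed on the expected reward, with the gradient estimated from sampled episodes instead of finite differences.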
Lessons from FizzBuzz for Apache MXNet.
GANs, one of the biggest breakthroughs in unsupervised learning in recent years, will bring us one step closer to artificial general intelligence.
A look at why the U.S. and China are investing heavily in this new computing stack.
Reduce both experimentation time and training time for neural networks by using many GPU servers.
A glimpse behind the scenes of a high-level deep learning framework.
While open-endedness could be a force for discovering intelligence, it could also be a component of AI itself.