The Enhancement AI provided to Amazon

Mahima Jindal
5 min read · Oct 19, 2020

What is Artificial Intelligence?

Artificial intelligence (AI) is intelligence demonstrated by machines, as opposed to the natural intelligence displayed by humans and animals. Colloquially, the term “artificial intelligence” is often used to describe machines (or computers) that mimic “cognitive” functions we associate with the human mind, such as “learning” and “problem solving”.

As machines become increasingly capable, tasks considered to require “intelligence” are often removed from the definition of AI, a phenomenon known as the AI effect. A quip known as Tesler’s Theorem puts it this way: “AI is whatever hasn’t been done yet.” For instance, optical character recognition is frequently excluded from things considered to be AI, having become a routine technology. Modern machine capabilities generally classified as AI include understanding human speech, competing at the highest level in strategic games (such as chess and Go), autonomously operating cars, intelligent routing in content delivery networks, and military simulations.

The easiest way to understand the relationship between artificial intelligence (AI), machine learning, and deep learning is as follows:

  • Think of artificial intelligence as the entire universe of computing technology that exhibits anything remotely resembling human intelligence. AI systems can include anything from an expert system — a problem-solving application that makes decisions based on complex rules or if/then logic — to the machine learning and deep learning systems described next (a minimal code sketch contrasting the two approaches follows this list).
  • Machine learning is a subset of AI in which an application learns by itself. It effectively reprograms itself as it digests more data, performing the specific task it was designed for with increasingly greater accuracy.
  • Deep learning is a subset of machine learning in which an application teaches itself to perform a specific task with increasingly greater accuracy, without human intervention.
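
To make the distinction concrete, here is a minimal, hypothetical sketch in Python contrasting a hand-written rule (expert-system style) with a model that learns the same decision from data. The feature, threshold, and labels are invented for illustration and are not tied to any product mentioned in this article.

```python
# Minimal sketch: a hand-written rule vs. a model that learns from examples.
# The feature (hours of sunlight) and the labels are made up for illustration.
from sklearn.tree import DecisionTreeClassifier

# Expert-system style: a human writes the decision logic as an if/then rule.
def rule_based_is_summer(hours_of_sunlight: float) -> bool:
    return hours_of_sunlight > 14  # fixed, hand-chosen threshold

# Machine-learning style: the decision logic is inferred from labeled examples.
X = [[8], [9], [10], [15], [16], [17]]   # hours of sunlight
y = [0, 0, 0, 1, 1, 1]                   # 0 = not summer, 1 = summer
model = DecisionTreeClassifier().fit(X, y)

print(rule_based_is_summer(15))       # True, because a human wrote the rule
print(model.predict([[15]])[0])       # 1, because the model learned the rule
```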

Artificial intelligence applications

💫Speech recognition

Also called speech-to-text (STT), speech recognition is AI technology that recognizes spoken words and converts them to digitized text.
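
As a rough illustration of the speech-to-text idea, the sketch below uses the open-source SpeechRecognition Python package to transcribe a WAV file. The file name is a placeholder, and this is not the stack any particular vendor uses.

```python
# Sketch: turning recorded speech into digitized text with SpeechRecognition.
# "meeting.wav" is a placeholder file; install with `pip install SpeechRecognition`.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("meeting.wav") as source:
    audio = recognizer.record(source)      # read the whole file into memory

# Send the audio to a free web recognizer (requires internet) and print the text.
print(recognizer.recognize_google(audio))
```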

💫Natural language processing (NLP)

NLP enables a software application, computer, or machine to understand, interpret, and generate human text. NLP is the AI behind digital assistants (such as Siri and Alexa), chatbots, and other text-based virtual assistants.
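
A toy sketch of the kind of intent matching that sits behind a text-based assistant is below. Real NLP systems use statistical and neural models; this keyword lookup is purely illustrative, and the intents and keywords are invented.

```python
# Toy sketch: map a user utterance to an intent by keyword overlap.
INTENTS = {
    "weather": {"weather", "rain", "sunny", "forecast"},
    "music":   {"play", "song", "music"},
}

def detect_intent(utterance: str) -> str:
    words = set(utterance.lower().split())
    for intent, keywords in INTENTS.items():
        if words & keywords:              # any keyword present?
            return intent
    return "unknown"

print(detect_intent("Will it rain tomorrow?"))    # -> weather
print(detect_intent("Play my favourite song"))    # -> music
```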

💫Image recognition (computer vision or machine vision)

Image recognition is AI technology that can identify and classify objects, people, writing, and even actions within still or moving images. Typically driven by deep neural networks, image recognition is used for fingerprint ID systems, mobile check deposit apps, video and medical image analysis, self-driving cars, and much more.
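
For a sense of what “driven by deep neural networks” looks like in practice, here is a minimal sketch that classifies one image with a pretrained network from torchvision (version 0.13 or later). The image path is a placeholder; this is a generic example, not the pipeline behind any specific product named here.

```python
# Sketch: classifying a single image with a pretrained deep network (torchvision).
# "photo.jpg" is a placeholder; requires `pip install torch torchvision pillow`.
import torch
from PIL import Image
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

image = preprocess(Image.open("photo.jpg")).unsqueeze(0)   # add batch dimension
with torch.no_grad():
    logits = model(image)
print(int(logits.argmax()))    # index of the predicted ImageNet class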

💫Ride-share services

Uber, Lyft, and other ride-share services use artificial intelligence to match up passengers with drivers to minimize wait times and detours, provide reliable ETAs, and even eliminate the need for surge pricing during high-traffic periods.
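
The matching step can be imagined as a nearest-available-driver search, as in the toy sketch below. Production dispatch systems optimize far more (road networks, traffic, predicted ETAs, fleet-wide matching); the coordinates and names here are invented for illustration.

```python
# Toy sketch of ride matching: assign a rider to the closest idle driver.
import math

def distance(a, b):
    # Straight-line distance; real systems use road networks and travel times.
    return math.hypot(a[0] - b[0], a[1] - b[1])

drivers = {"d1": (0.0, 0.0), "d2": (2.0, 1.0), "d3": (5.0, 5.0)}
rider = (1.5, 1.2)

best = min(drivers, key=lambda d: distance(drivers[d], rider))
print(best)   # -> d2, the nearest available driver
```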

💫 Virus and spam prevention

Once driven by rule-based expert systems, today’s virus and spam detection software employs deep neural networks that can learn to detect new types of viruses and spam as quickly as cybercriminals can dream them up.
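
To show the learned (rather than rule-based) approach in miniature, here is a tiny scikit-learn spam filter. The article describes deep networks in production; this Naive Bayes version is only a simplified stand-in, and the handful of messages and expected outputs are made up.

```python
# Sketch: a tiny learned spam filter trained on a few made-up messages.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "win a free prize now", "cheap loans click here",      # spam
    "lunch at noon tomorrow?", "see the attached report",  # ham
]
labels = [1, 1, 0, 0]   # 1 = spam, 0 = ham

model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(messages, labels)
print(model.predict(["click here to win a free prize"]))   # likely [1] (spam)
print(model.predict(["report for tomorrow's lunch"]))      # likely [0] (ham)
```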

💫 Household robots

iRobot’s Roomba vacuum uses artificial intelligence to determine the size of a room, identify and avoid obstacles, and learn the most efficient route for vacuuming a floor.

and many more…

❒ Case Study: The Alexa Effect

The flagship product of Amazon’s push into AI is its breakaway smart speaker, the Echo, and the Alexa voice platform that powers it. These projects also sprang from a six-pager, delivered to Bezos in 2011 for an annual planning process called Operational Plan One. One person involved was an executive named Al Lindsay, an Amazonian since 2004, who had been asked to move from his post heading the Prime tech team to help with something totally new. “A low-cost, ubiquitous computer with all its brains in the cloud that you could interact with over voice — you speak to it, it speaks to you,” is how he recalls the vision being described to him.

But building that system — literally an attempt to realize a piece of science fiction, the chatty computer from Star Trek — required a level of artificial intelligence prowess that the company did not have on hand.

Because Amazon didn’t have the talent in-house, it used its deep pockets to buy companies with expertise. In September 2011, it snapped up Yap, a speech-to-text company with expertise in translating the spoken word into written language. In January 2012, Amazon bought Evi, a Cambridge, UK, AI company whose software could respond to spoken requests like Siri does. And in January 2013, it bought Ivona, a Polish company specializing in text-to-speech, which provided technology that enabled Echo to talk.

💻Challenges Faced

The trickiest part of the Echo — the problem that forced Amazon to break new ground and in the process lift its machine-learning game in general — was something called far field speech recognition. It involves interpreting voice commands spoken some distance from the microphones, even when they are polluted with ambient noise or other aural detritus.
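
One generic tactic for the far-field problem is to train on clean speech that has been artificially mixed with background noise at a chosen signal-to-noise ratio. The sketch below shows that augmentation idea under stated assumptions; it is not a description of Amazon’s actual pipeline, and the signals are synthetic stand-ins for real recordings.

```python
# Sketch: augmenting clean training audio with background noise at a target SNR.
import numpy as np

def add_noise(clean: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix `noise` into `clean` so the result has the requested SNR in dB."""
    noise = noise[: len(clean)]                      # align lengths
    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(clean_power / (noise_power * 10 ** (snr_db / 10)))
    return clean + scale * noise

# Fake one-second signals at 16 kHz, standing in for real recordings.
rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 220 * np.linspace(0, 1, 16000))
babble = rng.normal(0, 0.3, 16000)
noisy = add_noise(speech, babble, snr_db=10)         # noisier training example
```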

One challenging factor was that the device couldn’t waste any time cogitating about what you said. It had to send the audio to the cloud and produce an answer quickly enough that it felt like a conversation, and not like those awkward moments when you’re not sure if the person you’re talking to is still breathing. Building a machine-learning system that could understand and respond to conversational queries in noisy conditions required massive amounts of data — lots of examples of the kinds of interactions people would have with their Echos. It wasn’t obvious where Amazon might get such data.

By breaking out Alexa beyond the Echo, the company’s AI culture started to coalesce. Teams across the company began to realize that Alexa could be a useful voice service for their pet projects too. “So all that data and technology comes together, even though we are very big on single-threaded ownership,” Prasad says. First, other Amazon products began integrating into Alexa: when you speak into your Alexa device you can access Amazon Music, Prime Video, your personal recommendations from the main shopping website, and other services. Then the technology began hopscotching through other Amazon domains. “Once we had the foundational speech capacity, we were able to bring it to non-Alexa products like Fire TV, voice shopping, the Dash Wand for Amazon Fresh, and, ultimately, AWS,” Lindsay says.

Another pivotal piece of the company’s transformation clicked into place once millions of customers (Amazon won’t say exactly how many) began using the Echo and the family of other Alexa-powered devices. Amazon started amassing a wealth of data — quite possibly the biggest collection of interactions of any conversation-driven device ever. That data became a powerful lure for potential hires.

As more people used Alexa, Amazon got information that not only made that system perform better but supercharged its own machine-learning tools and platforms — and made the company a hotter destination for machine-learning scientists.
