Despite massive progress in recent years, artificial intelligence and machine learning are still in their infancy. Amazon Web Services wants to change that, and is looking to demystify the domain.
The company took the stage at a big data conference in the backyard of some of its largest government customers to showcase its new AI and ML tools, solutions that help users funnel colossal volumes of data into its storage and computing infrastructure.
As reported, AWS executives addressed a big data conference in Tysons Corner, Virginia, revealing their plans to go beyond democratizing big data to demystifying artificial intelligence and machine learning.
Ben Snively, a solution architect at AWS, noted that building and deploying machine learning models, and eventually full platforms, is a time-consuming process.
Many never make it to production, and those that do often require up to 18 months to roll out.
There are several reasons for this, but the most notable is that dirty data must be cleaned before it can be put to use. The cloud giant estimates that 80% of data lakes currently lack metadata management systems, which help determine data sources, formats, and other attributes, all of which are necessary to wrangle big data.
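To make the metadata point concrete, here is a minimal illustrative sketch in Python of the kind of catalog such a system maintains. The class and field names, and the storage path, are hypothetical examples for illustration, not the API of any actual AWS service:

```python
# Illustrative sketch of a data-lake metadata catalog (hypothetical
# names, not an AWS API): each dataset is registered with its source,
# format, and other attributes so downstream users can find and
# interpret it.
from dataclasses import dataclass, field

@dataclass
class DatasetMetadata:
    name: str
    source: str                 # e.g. an object-store path or upstream system
    format: str                 # e.g. "csv", "parquet", "json"
    attributes: dict = field(default_factory=dict)

class MetadataCatalog:
    def __init__(self):
        self._entries = {}

    def register(self, meta: DatasetMetadata) -> None:
        # Index the dataset by name for later lookup.
        self._entries[meta.name] = meta

    def lookup(self, name: str) -> DatasetMetadata:
        return self._entries[name]

    def by_format(self, fmt: str) -> list:
        # Answer questions like "which datasets are stored as parquet?"
        return [m for m in self._entries.values() if m.format == fmt]

catalog = MetadataCatalog()
catalog.register(DatasetMetadata(
    name="clickstream",
    source="s3://example-bucket/raw/clickstream/",  # hypothetical path
    format="parquet",
    attributes={"owner": "analytics", "pii": False},
))
```

Without this kind of record, a data lake degrades into the unsearchable "data swamp" that makes the 18-month rollouts described above so common.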
That said, the combination of big data and analytics with artificial intelligence and machine learning creates what is called the flywheel effect, where organized and accessible data leads to faster insights and better products, which in turn generate more data.
AWS forecasts as much as 180 zettabytes of widely used and fast-moving data by 2025.
This is where tools like SageMaker, a machine learning and deep learning stack that the company introduced in November, come into play.
A solution like this frees up data scientists and opens the pathway to more experimentation as customers seek to connect big data with machine learning development. This is what will lead them beyond the "magic box" phase.