What is it?
Join us for Innovation@Amazon, a technical conference held by Amazon Development Center Poland at Stary Maneż. We will bring together some of Amazon’s leading engineers from around the world, who will take us on a journey through some of the hottest technical topics of the moment, ranging from Amazon Web Services, Machine Learning, and IoT to Amazon Game Studios and Amazon Alexa.
The event will take place at Stary Maneż on September 30th. During coffee and lunch breaks you will have the opportunity to meet and chat with speakers and other participants, ask questions, and exchange ideas. If you have more questions for our speakers or want to continue the discussions, you are invited to join us for drinks after the conference. Participation is free of charge; however, the number of seats is limited, so register as soon as you can.
30 SEP 2017
Meet our most valued speakers
Alexa Brain: Removing Frictions in Natural Language Interactions
There are three fundamental frictions in interacting with the applications and services on computing devices: 1) application/service discovery, 2) learning what these applications can do, and 3) limited information flow into the apps/services. The same set of frictions manifests itself with Alexa-enabled devices as well. For example, users do not know which skills exist to handle their requests, and they also do not know how to interact with those skills in a natural way. Currently, Alexa also has limited ability for contextual conversational understanding. Alexa Brain is a collection of new initiatives that aims to solve these problems in a principled and scalable manner. Alexa Brain will enable frictionless conversational interactions with Alexa to serve customer requests using any 1P domains or 3P skills. Alexa will also use context signals of various forms (e.g. session, personal graph, skill information) to precisely determine the user’s intent and carry context across turns to serve the best answer to fulfill the user’s request. We will present the new Alexa architecture, key components, underlying algorithms and models, as well as metrics to track the progress.
Alexa, a 3P developer perspective
The Alexa Skills Kit (ASK) and the Alexa Voice Service (AVS) are two free sets of tools that enable developers to leverage the power of Alexa. Join this session to learn about ASK and AVS, what developers are doing with them, and their latest features. You will also see a skill built from the ground up and learn some tips and tricks for developers.
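For a flavor of what a skill looks like under the hood, here is a minimal sketch of an AWS Lambda handler that returns a response in the documented Alexa skill response format; the skill behavior and speech text are hypothetical examples, not taken from the talk:

```python
def lambda_handler(event, context=None):
    """Minimal Alexa skill handler sketch (hypothetical "hello world" skill).

    A real skill would inspect event["request"]["type"] and dispatch on
    intent names; here we always return the same spoken response.
    """
    speech_text = "Hello from my first Alexa skill!"
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": True,
        },
    }
```

Deployed behind ASK, Alexa would read the `outputSpeech` text aloud and then close the session.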
Open-domain question answering at scale
Open-domain question answering is a complex problem space. Customer questions take many different forms, covering tens of thousands of relations that connect billions of entities. Getting started can be straightforward, but making something that really works for customers at scale is much harder. In this talk I’ll discuss the different dynamics that are introduced as we scale, and talk about some of the new dimensions that high scale brings to the problem space we operate in. I’ll share some of the approaches we take to these challenges, and explain how we use diverse, cross-functional teams to succeed.
Deep Learning for Developers
In recent months, Deep Learning has become the hottest topic in the IT industry. However, its arcane jargon and intimidating equations often discourage software developers, who wrongly think that they’re “not smart enough”. In this session, we’ll explain the basic concepts of Neural Networks and Deep Learning in simple terms, with minimal theory and math. Then, through code-level demos based on Apache MXNet, we’ll demonstrate how to build, train, and use models based on different types of networks: multi-layer perceptrons, convolutional neural networks, and long short-term memory networks. Finally, we’ll share some optimization tips that will help improve the training speed and the performance of your models.
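To make the multi-layer perceptron mentioned above concrete, here is a minimal forward pass in plain Python, with hand-picked weights that compute XOR of two binary inputs; this is an illustrative sketch, not MXNet code from the session:

```python
def relu(values):
    # Rectified linear unit: the activation used in the hidden layer.
    return [max(0.0, v) for v in values]

def dense(inputs, weights, biases):
    # Fully connected layer: weights[j] are the input weights of unit j.
    return [sum(x * w for x, w in zip(inputs, weights[j])) + biases[j]
            for j in range(len(biases))]

def mlp_forward(x):
    # A 2-2-1 multi-layer perceptron whose weights were chosen by hand
    # so that the network computes XOR of its two binary inputs.
    hidden = relu(dense(x, weights=[[1.0, 1.0], [1.0, 1.0]], biases=[0.0, -1.0]))
    output = dense(hidden, weights=[[1.0, -2.0]], biases=[0.0])
    return output[0]
```

Training replaces the hand-picked weights with values learned by gradient descent, which is where frameworks such as MXNet come in.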
Building Software at Amazon
In this talk we'll tell you how software is developed at Amazon, though not just at the level of technical systems. We'll start with writing and design processes, including the famed 6-pager. Then we'll move through development practices, implementation, and systems for code storage, build, and deployment.
Finally, we'll talk about what Operational Excellence means and how failures are handled, again going back to writing - this time to Cause of Error documents.
Large-scale Machine Learning: Deep, Distributed and Multi-Dimensional
Modern machine learning involves deep neural network architectures that yield state-of-the-art performance in multiple domains, such as computer vision, natural language processing, and speech recognition. As data and models scale, it becomes necessary to have multiple processing units (either CPU or GPU cores) for both training and inference. Apache MXNet is an open-source framework developed for distributed deep learning. I will describe the underlying lightweight hierarchical parameter server architecture that results in high efficiency. We obtain state-of-the-art performance: ~90% efficiency on P2.16xlarge AWS instances with 16 GPUs, and ~88% efficiency on multi-node AWS instances with 256 GPUs. I will also demonstrate how you can quickly start using MXNet by leveraging preconfigured Deep Learning AMIs (Amazon Machine Images) and CloudFormation Templates on AWS.
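The push/pull interaction between workers and a parameter server can be sketched in a few lines. This toy single-process version (the names are mine, not MXNet's KVStore API) only illustrates the idea:

```python
class ToyParameterServer:
    """Toy key-value parameter server: workers push gradients, pull weights."""

    def __init__(self, learning_rate=0.1):
        self.learning_rate = learning_rate
        self.weights = {}

    def push(self, key, gradient):
        # A worker pushes a gradient; the server applies an SGD step.
        current = self.weights.get(key, 0.0)
        self.weights[key] = current - self.learning_rate * gradient

    def pull(self, key):
        # Workers pull the latest weight value before the next mini-batch.
        return self.weights.get(key, 0.0)
```

In a real distributed setup, push and pull are network calls, and a hierarchical design aggregates gradients within each machine before they cross the network, which is where the efficiency gains come from.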
Pushing the current boundaries of deep learning requires using multiple dimensions and modalities. These can be encoded into tensors, which are natural extensions of matrices. We present new deep learning architectures that preserve the multi-dimensional information in data end-to-end. We show that tensor contractions are an effective replacement for fully connected layers in deep learning architectures. They result in significant space savings (more than 65%) with negligible performance degradation. We also introduce tensor regression in the output layer of the networks and establish further space savings. This is because tensor operations retain the multi-dimensional dependencies in activation tensors, while fully connected layers flatten them into vectors and lose this information. Tensor contractions present rich opportunities for hardware optimizations through extended BLAS kernels. We propose a new primitive known as StridedBatchedGEMM in cuBLAS 8.0 that significantly speeds up tensor contractions and avoids explicit copies and transpositions. These functionalities are available in the TensorLy package with an MXNet backend interface for large-scale, efficient learning.
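As a sketch of the contraction idea (the notation is mine, following the standard mode-$n$ product): a fully connected layer flattens an activation tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$ into a vector before multiplying by a weight matrix, whereas a tensor contraction layer contracts a single mode and keeps the others intact:

```latex
% Fully connected layer: flattening destroys the multi-dimensional structure.
y = W \, \operatorname{vec}(\mathcal{X}) + b,
\qquad W \in \mathbb{R}^{J \times I_1 I_2 I_3}

% Tensor contraction layer: contract only mode 1, keeping modes 2 and 3.
\mathcal{Y} = \mathcal{X} \times_1 U,
\qquad
\mathcal{Y}_{j, i_2, i_3} = \sum_{i_1=1}^{I_1} U_{j, i_1} \, \mathcal{X}_{i_1, i_2, i_3},
\qquad U \in \mathbb{R}^{J \times I_1}
```

The contraction needs only $J \times I_1$ parameters instead of $J \times I_1 I_2 I_3$, which is the source of the space savings described above.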
Robotics and the Future of Order Fulfillment
The talk discusses the current application of robotics at Amazon and explores the future scope of robotics in the e-commerce industry. How has robotics changed the nature of work at Amazon? What are the engineering challenges at the system, software, and hardware level for large-scale robotic fulfillment centers? How has the rapid pace of innovation in Machine Learning, AI, and Cloud Computing affected robotics? What is the future of work and order fulfillment? Join this talk to learn more.
Keynote Amazon: Customer-Driven Innovation in Action
Since the beginning, the design process at Amazon has been to “start with the customer and work backwards.” Over the years, this process has inspired and guided the development of many innovative products and services, including the Kindle, Alexa, and Amazon Web Services (AWS). This talk will provide the audience with insights into Amazon’s working-backwards process, with information about leadership principles, narratives, press releases, FAQs, two-pizza teams, and much more.
Q&A with Jeff Barr
Last conference photos
Find the word that is hidden in the audio. Generate the MD5 hash of the word and use the hash as the local part of the following e-mail address: firstname.lastname@example.org. Hint: if the word were ALEXA, the e-mail address would be email@example.com. Send your first name, last name, and phone number. We will contact you.
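The hashing step can be done with Python's standard library. In the sketch below, the domain is a placeholder (use the address from the announcement), and ALEXA is just the hint's example word:

```python
import hashlib

def puzzle_address(word, domain="example.org"):
    # The lowercase hexadecimal MD5 digest of the word becomes the
    # local part of the e-mail address.
    local_part = hashlib.md5(word.encode("utf-8")).hexdigest()
    return "{}@{}".format(local_part, domain)
```

Note that MD5 is case-sensitive, so hash the hidden word exactly as it is spelled.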
These awesome companies support us