There are so many good talks happening at the same time that missing out on some sessions is unavoidable. But imagine I had a time-turner necklace and could actually “attend” 3 (maybe 5) sessions happening simultaneously. Taking into account my current personal interests and tastes, here’s how my day would look:
11:00 a.m.
- SparkNet: Training deep networks in Apache Spark
- The state of Spark and where it is going in 2016
- Distributed stream processing with Apache Kafka (introducing Kafka Streams)
- A year of anomalies: Building shared infrastructure for anomaly detection (Netflix’s platform for anomaly detection)
11:50 a.m.
- Augmenting machine learning with human computation for better personalization
- Attack graphs: Visually exploring 300M alerts per day (large-scale, interactive visualizations using GPUs)
- Fast big data analytics and machine learning using Alluxio and Spark in Baidu
- Uber, your Hadoop has arrived: Powering intelligence for Uber’s real-time marketplace
1:50 p.m.
- Grounding big data: A meta-imperative
- Scala and the JVM as a big data platform: Lessons from Apache Spark
- eBay analysts and governed self-service analysis: Delivering “turn-by-turn” smart suggestions
2:40 p.m.
- How to make analytic operations look more like DevOps: Lessons learned moving machine-learning algorithms to production environments
- BayesDB: Query the probable implications of your data
- Visualization as data and data as visualization: Building insights in a data-flow world
- Unified namespace and tiered storage in Alluxio
- Faster conclusions using in-memory columnar SQL and machine learning
4:20 p.m.
- Leveraging Apache Spark to analyze billions of user actions to reveal hidden fraudsters (unsupervised learning)
- Can deep neural networks save your neural network? Artificial intelligence, sensors, and strokes
- Not your father’s database: How to use Apache Spark properly in your big data architecture
- Real-world smart applications with Amazon Machine Learning
5:10 p.m.