The AI Conference in SF: The Future of AI is Now!


The AI Conference: Bridge the gap between research and practice

I am elated to announce my role as Program Chair for The AI Conference, a unique event designed to bridge the gap between research and practical applications. While I have continued to serve as co-chair and program committee member for numerous AI and Data conferences since taking a hiatus from events, this will be my first time chairing an event since 2019. I am thrilled to join forces with the creators of MLConf, one of my favorite conferences to attend before the pandemic. Our inaugural, unmissable event in San Francisco is scheduled for September 25-26.

We currently find ourselves in an extraordinary era of AI application development. The rate of innovation is genuinely unparalleled. However, it is crucial for teams to strike a delicate balance between rapidly building applications and diligently maintaining a responsible approach to ensure the most positive outcomes.

The AI Conference is a vendor-neutral event that gives participants the opportunity to network and engage with prominent researchers and practitioners. Attendees will learn about the latest developer tools, platforms, and data sources while exploring real-world applications and use cases across various domains.



We are actively seeking speakers with expertise in diverse areas, including the implementation of AI in real-world systems across industries such as healthcare, finance, manufacturing, retail, media, and e-commerce. We are also interested in speakers who are knowledgeable about model development, deployment, and the use of cutting-edge developer tools and platforms for creating AI solutions.

We encourage you to submit a talk, and we welcome your suggestions for topics or speakers that would enrich our inaugural conference. Please do not hesitate to reach out to me personally with any questions or recommendations. We look forward to hearing from you.


With the recent surge in popularity of large language models (LLMs), computer vision (CV) can sometimes be overlooked. It’s important to remember that CV was one of the original driving forces behind the resurgence of deep learning, and there are many companies that need applications built on CV models.

Spotlight

1. Benchmarking LLMs in the Wild with Elo Ratings. This much-needed new benchmark platform for LLMs uses the Elo rating system and crowdsourced anonymous battles to provide a more accurate and fair assessment of their capabilities. The most recent scores found that GPT-4, the highest-rated proprietary model, generates better answers than Vicuna, the best open-source LLM, 82% of the time.
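For readers curious about the mechanics, here is a minimal sketch of the standard Elo update used in head-to-head rating systems of this kind. The K-factor of 32 and the 400-point scale are conventional chess defaults, not values from the benchmark platform itself:

```python
def elo_update(r_a, r_b, score_a, k=32):
    """Update Elo ratings after one head-to-head "battle".

    score_a is 1.0 if model A wins, 0.0 if it loses, 0.5 for a tie.
    """
    # Expected score of A given the current rating gap.
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    # Winner gains what the loser loses, scaled by the surprise.
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1 - score_a) - (1 - expected_a))
    return r_a_new, r_b_new

# Two evenly rated models; A wins one crowdsourced battle.
a, b = elo_update(1000, 1000, 1.0)  # → (1016.0, 984.0)
```

Averaged over many anonymous battles, these updates converge toward a ranking where rating gaps reflect empirical win rates.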

2. Latency goes subsecond in Apache Spark Structured Streaming. This post details a new offset management system that reduces the inherent processing latency in Structured Streaming, enabling it to achieve latencies below 250 ms. That improvement meets the Service Level Agreement (SLA) requirements of a large share of operational workloads.

3. Towards Expert-Level Medical Question Answering with LLMs. Google and DeepMind just unveiled Med-PaLM 2, an AI system that can answer complex medical questions with an accuracy that rivals that of physicians. Human evaluations of Med-PaLM 2’s answers found that they were preferred to those produced by physicians in most categories related to clinical utility. These results are a significant step towards developing AI systems that can provide accurate and reliable medical information. But while this breakthrough points to a future where AI could enhance medical knowledge access and patient care, further research is essential to ensure real-world efficacy, safety, and ethical integrity.

4. Building a Self Hosted Question Answering Service in 20 minutes. This post describes how easy it is to build a retrieval-augmented question answering service using Ray and LangChain. The service first queries a search engine (or a vector database) for results, and then uses an LLM to summarize the results.
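The retrieve-then-summarize pattern the post describes can be sketched in a few lines of plain Python. This is an illustrative toy, not the post's Ray/LangChain implementation: `search` is a naive keyword retriever standing in for a search engine or vector database, and `summarize` is a placeholder for the LLM call:

```python
import re

# A tiny stand-in corpus; a real service would query a search engine
# or vector database instead.
DOCS = [
    "Ray is a framework for scaling Python and AI workloads.",
    "LangChain helps compose LLM calls into applications.",
    "Structured Streaming is Spark's stream processing engine.",
]

def _tokens(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def search(query, docs, top_k=2):
    """Rank documents by keyword overlap with the query (toy retriever)."""
    terms = _tokens(query)
    scored = sorted(docs, key=lambda d: -len(terms & _tokens(d)))
    return scored[:top_k]

def summarize(question, passages):
    """Placeholder for an LLM call that answers from retrieved context."""
    context = " ".join(passages)
    return f"Q: {question}\nContext: {context}"

def answer(question, docs=DOCS):
    """Retrieval-augmented QA: retrieve first, then let the LLM summarize."""
    hits = search(question, docs)
    return summarize(question, hits)
```

Swapping the toy retriever for a vector-database lookup and the placeholder for a real LLM call yields the architecture the post builds, with Ray handling scaling and serving.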


Synthetic Data Generation Methods

Data Exchange Podcast

1. Boosting Perception With Synthetic Data.  Omar Maher is Director of Product Marketing at Parallel Domain, a startup focused on enhancing machine perception capabilities through synthetic data. Our discussion explores the increasing adoption of synthetic data and the driving forces behind its use. We examine significant advancements in synthetic data generation, as well as its intersection with Generative AI.

2. Machine Learning for Critical Applications.  Patrick Hall, co-founder of BNH and a visiting faculty member in decision sciences at the George Washington University School of Business, joins Agus Sudjianto, EVP and Head of Corporate Model Risk at Wells Fargo, in a discussion covering various topics presented in their recently published book, Machine Learning for High-Risk Applications.


If you enjoyed this post please support our work by encouraging your friends and colleagues to subscribe to our newsletter:
