“Your newsletter has great content.”

“Gradient Flow newsletter is a fun read!”

“One of maybe 2-3 email newsletters that I regularly read through.”

“In-depth reports on specialized topics such as machine learning and AI technologies.”

“If you enjoyed O’Reilly’s great data newsletter, you should subscribe to @bigdata’s new Gradient Flow newsletter.”


Maximizing the Potential of Large Language Models

The LLM Three-Pronged Strategy: Tune, Prompt, Reward. As language models become increasingly common, it becomes crucial to employ a broad set of strategies and tools in order to fully unlock their potential. Foremost among these strategies is prompt engineering, which involves the careful selection and arrangement of words within a prompt or query in order to guide the…
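As a rough illustration of what "careful selection and arrangement of words within a prompt" can look like in practice, a few-shot prompt is often assembled programmatically from an instruction, some worked examples, and the actual query. The template and function below are a hypothetical sketch, not code from the newsletter:

```python
def build_prompt(instruction, examples, query):
    """Assemble a few-shot prompt: an instruction, worked
    input/output examples, then the query awaiting completion."""
    lines = [instruction, ""]
    for example_input, example_output in examples:
        lines.append(f"Input: {example_input}")
        lines.append(f"Output: {example_output}")
        lines.append("")
    # The trailing "Output:" cues the model to complete the answer.
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = build_prompt(
    "Classify the sentiment of each input as positive or negative.",
    [
        ("I loved this newsletter.", "positive"),
        ("The article was confusing.", "negative"),
    ],
    "The podcast episode was excellent.",
)
print(prompt)
```

Varying the instruction wording, the number and order of examples, and the final cue are exactly the kinds of arrangement choices prompt engineering is concerned with.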

The Future of Search and How You Can Shape It

Navigating the Future of Search. ChatGPT has revived interest in search. It revealed that a blend of artificial intelligence and a prompt-driven interface is exceptionally well-matched for search applications. As a result of ChatGPT, some well-funded search startups are emphasizing their use of large language models (LLMs) and launching chatbot-like interfaces. ChatGPT has been receiving a great deal of…

Unveiling the Future of AI: Insights on AI Chips, Knowledge Graphs, and AI Regulations

AI Chips: Past, Present, Future. In a recent collaboration with Assaf Araki of Intel Capital, we delve into specialized hardware for Artificial Intelligence. The training of massive AI models has historically been dominated by Nvidia GPUs and Google TPUs. There is a significant challenge facing new entrants to the market: the lack of resources to develop a software…

Revolutionizing Data Science: The Latest Trends in Automation, Experimentation, and Language Model Evaluation

Data Exchange Podcast. 1. Evaluating Language Models. As more general-purpose models are widely used, the need for tools to help developers pick models that fit their needs and understand the models’ limitations increases. Percy Liang is Associate Professor of Computer Science and Statistics, and Director of the new Center for Research on Foundation Models at Stanford University. 2.…
