Gradient Flow #38: Large Language Models, Infinite Laptop, Overhyping AI

This edition has 455 words, which will take you about 3 minutes to read.

“Yesterday I was clever, so I wanted to change the world. Today I am wise, so I am changing myself.” – Rumi

Data Exchange podcast

  • Training and Sharing Large Language Models    Connor Leahy is an AI researcher at Aleph Alpha GmbH and a founding member of EleutherAI, a collective of researchers and engineers building resources and models for the natural language research community. We discussed the state of natural language research, the rise of large language models, embeddings, and a new role that Connor and others call the “prompt engineer”.
  • Neural Models for Tabular Data    Sercan Arik, a Research Scientist at Google Cloud AI, and his collaborators recently published a paper on TabNet, a deep neural network architecture for tabular data. TabNet uses sequential attention to select features, is explainable by design, and, in both benchmark tests and actual deployments, outperforms or is on par with other models on classification and regression problems. A short usage sketch appears after this list.
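
For readers who want to experiment with TabNet, here is a minimal sketch using the community pytorch-tabnet package trained on synthetic data. This is an illustrative assumption on our part: pytorch-tabnet is not necessarily the implementation discussed in the episode, and the dataset is a purely synthetic stand-in for real tabular data.

    # Sketch: train TabNet with the community pytorch-tabnet package
    # (pip install pytorch-tabnet) -- an assumption for illustration.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from pytorch_tabnet.tab_model import TabNetClassifier

    # Purely synthetic stand-in for a real tabular dataset.
    X, y = make_classification(n_samples=5000, n_features=20,
                               n_informative=10, random_state=0)
    X_train, X_valid, y_train, y_valid = train_test_split(
        X.astype(np.float32), y, test_size=0.2, random_state=0)

    clf = TabNetClassifier()  # default architecture hyperparameters
    clf.fit(X_train, y_train,
            eval_set=[(X_valid, y_valid)],  # early stopping on validation
            max_epochs=50, patience=10)

    # Global feature importances derived from the sequential attention
    # masks, the mechanism behind TabNet's built-in explainability.
    print(clf.feature_importances_)

The same attention masks can also be inspected per instance (pytorch-tabnet exposes them via an explain method), which is what the paper means by the model being explainable.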

Featured Virtual Event

Our friend Paco Nathan is part of a strong roster of speakers at the upcoming Datanova For Data Scientists. He will moderate a session on the data lakehouse, an emerging data management paradigm:

REGISTER NOW

Data & Machine Learning tools and infrastructure

[Image by Armin Forster from Pixabay.]

Funding Updates

Recommendations

Closing Short:

If you enjoyed this newsletter, please support our work by encouraging your friends and colleagues to subscribe.