GenAI and LLMs: Insights from TikTok and KPMG


Generative AI: Insights from the Frontlines

A recent survey of large enterprises reveals a significant shift towards in-house application development, driven by the rise of foundation models offering accessible APIs. This move away from reliance on external vendors for AI-driven solutions has major implications for the industry. For instance, companies that once relied on third-party chatbots and custom recommenders can now develop these tools in-house, potentially saving costs and increasing customization. As a result, teams and investors at the forefront of GenAI and LLM innovations must adapt to this changing landscape to remain competitive.

According to the U.S. Census Bureau, AI adoption among U.S. businesses is rapidly growing:

  • Usage rates are expected to nearly double, from 3.7% in Fall 2023 to 6.6% by Fall 2024.
  • The employment-weighted use rate suggests a broader impact on the workforce.
  • Larger and younger firms are leading the way in AI adoption across various industries and states.

From AI Adoption in the U.S.

To understand the current state of GenAI and LLM usage, I delved into recent U.S. online job postings related to these technologies. The analysis revealed a diverse range of applications, spanning from content creation and business operations to communication, education, and even ethical considerations.

Technology and development applications dominate the landscape, showcasing the immense potential and practical utility of LLMs in streamlining and enhancing a wide array of software development processes and system-related tasks.  Business and marketing applications follow closely behind, demonstrating the growing recognition of GenAI’s potential to revolutionize marketing strategies and customer experiences.

Notably, while customer-facing applications like chatbots and content creation are gaining traction, many companies are initially focusing on internal use cases such as code generation, data analysis, and knowledge management. This suggests that enterprises are still navigating the challenges of deploying GenAI in sensitive external-facing scenarios, opting to first leverage the technology to streamline internal processes and improve efficiency.

Job postings also reveal varying degrees of GenAI adoption across industries and companies. To illustrate the contrast, I examined postings from two very different organizations: TikTok and KPMG US. As a consumer-facing social media giant, TikTok is heavily investing in GenAI for content creation, moderation, and personalization. From generating ad creatives to developing intelligent recommendation systems, TikTok is pushing the boundaries of what’s possible with GenAI. The company’s job postings reflect this focus, with numerous positions related to machine learning, natural language processing, and computer vision.

On the other hand, KPMG US, a professional services firm, is taking a more measured approach, focusing on integrating GenAI into its audit, tax, and advisory offerings. KPMG’s use cases revolve around enhancing automation, improving decision-making, and ensuring ethical AI usage. The firm’s job postings emphasize the importance of AI governance, explainable AI, and the integration of GenAI with traditional business processes. These examples showcase the diverse ways in which companies are adopting and leveraging GenAI technology.

Recommendations for Enterprises Adopting GenAI and LLMs

The current landscape of GenAI and LLM adoption reveals several key trends. Enterprises are primarily focusing on in-house development, citing the lack of mature, market-ready solutions and the increased accessibility of foundation models through APIs. Internal use cases, such as code generation and text summarization, are leading the way, as concerns surrounding hallucination, safety, and public perception drive a focus on applications where risks can be more easily managed. Successfully deploying these technologies requires close collaboration between diverse teams, including engineers, product managers, UX designers, and domain experts. Additionally, addressing bias and ensuring responsible AI usage is crucial for building trust and mitigating potential risks.

For teams building GenAI and LLM applications, there are several key takeaways to consider. First, focus on solving real problems by identifying specific pain points within your organization or industry that can be addressed with these technologies. Second, prioritize user experience to ensure your applications are user-friendly and accessible to a wide range of users. Third, invest in talent and infrastructure, as building and deploying these technologies requires skilled personnel and robust computing resources. Finally, embrace ethical AI principles by mitigating bias, ensuring fairness, and being transparent about the limitations of your models.


From The AI Index Report.

Data Exchange Podcast

1. Automating Software Upgrades.  Infield.ai is revolutionizing open-source software dependency management by combining automation with expert developers. Their CEO, Steve Pike, describes how their innovative approach ensures companies stay up-to-date with the latest releases, features, and security fixes, thereby saving time and resources.

2. Navigating the Landscape of Large Language Models and Innovation. In this monthly news roundup, Paco Nathan and I discuss recently released large language models, constraint-driven innovation in databases, highlights from Nvidia’s GTC 2024, and the first known AI workload security exploit. The episode covers a range of topics related to advancements and challenges in AI and data management.


Derived from On the Challenges and Opportunities in Generative AI.

Judicial AI: A Legal Framework to Manage AI Risks

Constitutional AI (CAI), pioneered by Anthropic, is an approach to training AI systems that leverages a set of principles, akin to a constitution, to guide the AI’s behavior. The method encodes human values through these established principles, supplemented by a small number of examples used in prompts, and aims to reduce reliance on extensive human labeling for tasks like ensuring harmlessness.

The term “constitutional” suggests that building and deploying a general AI system requires establishing core principles to guide its development and use, even if those principles remain implicit or unspoken. CAI involves a two-stage process:

  1. A supervised learning phase, where an initial model generates responses, critiques its own responses according to the principles, revises the responses, and is fine-tuned on the revised responses.
  2. A reinforcement learning phase, where the fine-tuned model generates response pairs, evaluates which is better according to the principles, and this AI feedback is used to train a preference model. Reinforcement learning is then performed using the preference model as the reward signal.
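The supervised phase above can be sketched in code. This is a minimal illustration, not Anthropic's implementation: the model calls are placeholder functions (in practice each would be an LLM API call), and the principles shown are hypothetical stand-ins for the actual constitution.

```python
# Hypothetical principles; the real CAI constitution is a longer set
# of natural-language instructions.
PRINCIPLES = [
    "Avoid responses that are harmful or unethical.",
    "Explain objections rather than simply refusing.",
]

def generate(prompt: str) -> str:
    """Placeholder for a base-model completion call."""
    return f"draft response to: {prompt}"

def critique(response: str, principle: str) -> str:
    """Placeholder: ask the model to critique its own response
    against a single principle."""
    return f"critique of response under: {principle}"

def revise(response: str, critique_text: str) -> str:
    """Placeholder: ask the model to revise in light of the critique."""
    return response + " [revised]"

def supervised_phase(prompt: str) -> str:
    """One critique-and-revise pass per principle; the revised
    responses would then form the fine-tuning dataset."""
    response = generate(prompt)
    for principle in PRINCIPLES:
        c = critique(response, principle)
        response = revise(response, c)
    return response
```

The key structural point is the loop: the model supervises itself, principle by principle, so human effort shifts from labeling individual responses to authoring the principles.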

Balancing Helpfulness and Harmlessness

By encoding AI training objectives in a set of natural language instructions or principles, CAI aims to make AI decision-making more transparent, understandable, and controllable. This approach reduces the reliance on extensive human feedback, making the supervision of AI models more efficient and cost-effective. CAI also addresses the crucial challenge of balancing helpfulness and harmlessness, encouraging AI to engage and explain its objections to harmful requests. Furthermore, by using principles to guide AI behavior, CAI can help ensure that AI systems are fair, unbiased, and do not perpetuate harmful stereotypes. With its potential to scale supervision, improve transparency, and enable faster iteration, Constitutional AI holds great promise for the development of safer and more aligned AI systems.

Judicial AI

Luminos.AI is pioneering what it calls Judicial AI, a novel approach that both trains and evaluates AI systems using a custom-built “constitution” of principles. This constitution governs the AI’s behavior, offering a more targeted approach compared to the broad principles outlined in the original Constitutional AI paper.  Judicial AI provides a specific framework for operationalizing and implementing laws and rules governing AI, similar to how a constitution’s high-level values are translated into granular legal provisions aligned with existing laws and regulations.

Luminos AI Judges can be used at all phases of the model lifecycle – from training, where each Judge can help optimize for the legality of the AI’s outputs, to deployment, where models can use the Judge’s legality scoring to ensure each output aligns with the relevant laws and principles.

In Judicial AI, an AI “Judge” evaluates each AI output against specific provisions of the constitution, which can be drafted collaboratively by your team and Luminos.AI. This granular score ensures the AI adheres to each of the established principles.  Luminos.AI provides a starter set of provisions that can be tailored to your specific needs, guaranteeing alignment between the AI system and your organization’s goals and values.

Judicial AI’s provisions encompass both prohibitions and affirmations. Prohibitions prevent the AI from engaging in harmful or unethical actions, such as biased decision-making, specific types of privacy violations, or deceptive practices. Affirmations, on the other hand, encourage desirable qualities like empathy, explainability, respect, and promoting fairness and justice. This two-pronged approach aligns with legal frameworks and fosters the development of AI systems that are not only safe and ethical but also aligned with human values and expectations.
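One way to picture per-provision scoring is the sketch below. The schema and scoring function are my own illustration, assuming a weighted average over provisions; Luminos.AI's actual provision format and Judge internals are not public, and the toy keyword check stands in for a trained scoring model.

```python
from dataclasses import dataclass

@dataclass
class Provision:
    """Hypothetical schema for one clause of the constitution."""
    name: str
    kind: str          # "prohibition" or "affirmation"
    weight: float = 1.0

def score_provision(output: str, provision: Provision) -> float:
    """Placeholder: a real Judge would query a model trained to rate
    compliance with this provision on a [0, 1] scale. Here a toy
    keyword check stands in for that model."""
    if provision.kind == "prohibition" and "deceptive" in output.lower():
        return 0.0
    return 1.0

def legality_score(output: str, provisions: list[Provision]) -> float:
    """Aggregate compliance: weighted average across all provisions."""
    total = sum(p.weight for p in provisions)
    return sum(p.weight * score_provision(output, p) for p in provisions) / total
```

Keeping scores granular (one per provision) rather than a single pass/fail is what lets teams see which specific rule an output violated, whether during training or at deployment time.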

For more information about Luminos.AI, which just came out of stealth this month, reach out to contact@luminos.ai.


If you enjoyed this post please support our work by encouraging your friends and colleagues to subscribe to our newsletter:
