
Implementing a Transformer decoder for text generation in Keras and TensorFlow
The recent wave of generative language models is the culmination of years of research that began with the seminal "Attention Is All You Need" paper. The paper introduced the Transformer architecture that would later become the backbone of numerous language models. These text generation models are autoregressive, meaning they produce text one token at a time, with each new token conditioned on the tokens generated so far.
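
What "autoregressive" looks like in code, as a rough sketch rather than the article's implementation: the decoder applies a causal (look-ahead) mask so each position can only attend to earlier positions, and generation appends one predicted token at a time.

```python
import tensorflow as tf

def causal_mask(seq_len):
    """Lower-triangular mask: position i may only attend to positions <= i."""
    return tf.linalg.band_part(tf.ones((seq_len, seq_len)), -1, 0)

print(causal_mask(4))
# [[1. 0. 0. 0.]
#  [1. 1. 0. 0.]
#  [1. 1. 1. 0.]
#  [1. 1. 1. 1.]]

# Greedy decoding with a hypothetical `decoder(tokens) -> logits` model:
# tokens = [start_id]
# for _ in range(max_length):
#     logits = decoder(tf.constant([tokens]))       # (1, len(tokens), vocab_size)
#     tokens.append(int(tf.argmax(logits[0, -1])))  # next token from the last position
```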

Text Classification With BERT and KerasNLP
BERT is a popular masked language model: some words in the input are hidden, and the model is trained to predict them. The model is bidirectional, meaning it sees the words on both the left and the right of each position, which makes it a good choice for tasks such as text classification. Training BERT from scratch can quickly become expensive, so in practice you fine-tune a pretrained checkpoint.
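
For a flavour of the KerasNLP route, a minimal sketch (the preset name, class count, and example sentences are illustrative assumptions, not the article's exact code):

```python
import keras_nlp

# Pretrained BERT backbone plus a classification head; preset names may vary
# between KerasNLP versions.
classifier = keras_nlp.models.BertClassifier.from_preset(
    "bert_base_en_uncased",
    num_classes=2,
)

# Raw strings go straight in; tokenization and packing are handled by the
# attached preprocessor.
predictions = classifier.predict([
    "What an amazing movie!",
    "A total waste of two hours.",
])
```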

How to Build Large Language Model Applications with PaLM API and LangChain
You can now use Generative AI Studio on Vertex AI to prompt, tune, and deploy Google's foundation models, including PaLM 2, Imagen, Codey, and Chirp. You can easily design and fine-tune your prompt, then copy the code required to deploy the solution. Leveraging a foundation model is a no-brainer because you build on a state-of-the-art model instead of training one from scratch.

How to Perform Image Augmentation With KerasCV
Training computer vision models with little data can lead to poor model performance. This problem can be solved by generating new data samples from the existing images; for example, you can create new images by flipping and rotating the existing ones. Generating new image samples from existing ones is known as data augmentation.
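
As a quick illustration of the idea, here is a sketch using the built-in Keras preprocessing layers (KerasCV provides a richer set of augmentation layers, which is what the article itself covers):

```python
import tensorflow as tf
from tensorflow import keras

# Random flips and rotations applied on the fly during training.
augmenter = keras.Sequential([
    keras.layers.RandomFlip("horizontal"),
    keras.layers.RandomRotation(factor=0.1),  # up to roughly +/-36 degrees
])

images = tf.random.uniform((8, 224, 224, 3))   # dummy batch of images
augmented = augmenter(images, training=True)   # training=True enables the randomness
```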

How to Build LLM Applications With LangChain and OpenAI
LangChain is an open-source tool for building large language model (LLM) applications. It supports a variety of open-source and closed models, making it easy to create these applications with one tool. Some of the modules in LangChain include:

* Models, for the supported models and integrations
* Prompts, for making it easy to create, manage, and reuse prompt templates
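
A minimal sketch of wiring a prompt to an OpenAI model with the classic LangChain API (module paths have changed in newer LangChain releases, and an OPENAI_API_KEY environment variable is assumed):

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Assumes OPENAI_API_KEY is set in the environment.
llm = OpenAI(temperature=0.7)

prompt = PromptTemplate(
    input_variables=["topic"],
    template="Write a one-sentence summary of {topic}.",
)

chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run(topic="variational autoencoders"))
```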

How to Train Stable Diffusion With Keras
Image generation models are causing a sensation worldwide, particularly the powerful Stable Diffusion technique. With Stable Diffusion, you can generate images on your laptop, which was previously impossible. Here's how diffusion models work in plain English: generating images involves two processes; diffusion gradually adds noise to an image until only noise remains.
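
For a taste of the Keras side, KerasCV ships a pretrained Stable Diffusion model; a minimal sketch (the prompt is illustrative, and the weights are downloaded on first use):

```python
import keras_cv

# Pretrained Stable Diffusion wrapper from KerasCV.
model = keras_cv.models.StableDiffusion(img_width=512, img_height=512)

images = model.text_to_image(
    "a watercolor painting of a lighthouse at sunset",
    batch_size=1,
)
# `images` is a (1, 512, 512, 3) uint8 array ready to plot or save.
```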

How to Generate Images with Variational Autoencoders (VAE): Create a VAE from scratch using Keras and TensorFlow
An autoencoder takes an input image and creates a low-dimensional representation, i.e., a latent vector. This vector is then used to reconstruct the original image. Regular autoencoders take an image as input and output the same image. Variational autoencoders (VAE), however, generate new images with the same distribution as the training data.
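
The piece a VAE adds on top of a plain autoencoder is sampling from a latent distribution rather than using a fixed latent vector; a minimal sketch of the reparameterization trick in Keras (illustrative, not the article's full model):

```python
import tensorflow as tf
from tensorflow import keras

class Sampling(keras.layers.Layer):
    """Reparameterization trick: z = mean + exp(0.5 * log_var) * epsilon."""

    def call(self, inputs):
        z_mean, z_log_var = inputs
        epsilon = tf.random.normal(shape=tf.shape(z_mean))
        return z_mean + tf.exp(0.5 * z_log_var) * epsilon

# The encoder predicts a mean and log-variance per image instead of a single
# latent vector; sampling from that distribution is what lets a VAE generate
# new images rather than only reconstruct its inputs.
latent_dim = 2
z_mean = keras.Input(shape=(latent_dim,))
z_log_var = keras.Input(shape=(latent_dim,))
z = Sampling()([z_mean, z_log_var])
```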

Distributed training with TensorFlow: How to train Keras models on multiple GPUs
Training computer vision models takes a lot of time because of the size of both the models and the image data, and it can take prolonged periods especially when training on a single GPU. You can reduce the training time by distributing the training across several GPUs. This is where distributed training with TensorFlow comes in.
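
The usual entry point in TensorFlow is `tf.distribute.MirroredStrategy`, which replicates the model on every visible GPU and averages the gradients; a minimal sketch (the toy model is illustrative only):

```python
import tensorflow as tf
from tensorflow import keras

strategy = tf.distribute.MirroredStrategy()
print("Number of replicas:", strategy.num_replicas_in_sync)

# The model and optimizer must be created inside the strategy scope.
with strategy.scope():
    model = keras.Sequential([
        keras.layers.Flatten(input_shape=(28, 28)),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

# model.fit(train_dataset, epochs=5)  # training then proceeds as usual
```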