TensorFlow

Implementing a Transformer decoder for text generation in Keras and TensorFlow
The recent wave of generative language models is the culmination of years of research starting with the seminal "Attention is All You Need" paper. The paper introduced the Transformer architecture that would later be used as the backbone for numerous language models. These text generation language models are autoregressive, meaning they generate text one token at a time, conditioning each new token on the ones produced so far.
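
As a minimal sketch of what "autoregressive" means in practice (not the article's actual decoder), here is a greedy generation loop, assuming a hypothetical Keras `model` that maps a batch of token IDs to next-token logits:

```python
import tensorflow as tf

def generate(model, prompt_ids, max_new_tokens=20, end_id=None):
    # Greedy autoregressive decoding: append the most likely next token
    # and feed the growing sequence back into the model.
    # Assumes `model` maps token IDs (batch, seq_len) to logits
    # (batch, seq_len, vocab_size); both are placeholders here.
    ids = list(prompt_ids)
    for _ in range(max_new_tokens):
        logits = model(tf.constant([ids]))       # (1, len(ids), vocab_size)
        next_id = int(tf.argmax(logits[0, -1]))  # greedy pick of next token
        ids.append(next_id)
        if end_id is not None and next_id == end_id:
            break
    return ids
```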

Text Classification With BERT and KerasNLP
BERT is a popular masked language model: during training, some words are hidden from the model, which learns to predict them. The model is bidirectional, meaning it has access to the words both to the left and to the right, making it a good choice for tasks such as text classification. Training BERT from scratch can quickly become expensive, so in practice you fine-tune a pretrained checkpoint.
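
A hedged sketch of that fine-tuning workflow: KerasNLP exposes pretrained BERT classifiers through presets, and tokenization is handled internally. The preset name and toy data below are illustrative:

```python
import keras_nlp

# Pretrained BERT plus a classification head; num_classes=2 is a placeholder.
classifier = keras_nlp.models.BertClassifier.from_preset(
    "bert_base_en_uncased",
    num_classes=2,
)

# Fine-tune directly on raw strings with integer labels (toy data).
classifier.fit(x=["great film", "terrible film"], y=[1, 0], batch_size=2)
print(classifier.predict(["a delightful surprise"]))
```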

How to Perform Image Augmentation With KerasCV
Training computer vision models with little data can lead to poor model performance. This problem can be solved by generating new data samples from the existing images. For example, you can create new images by flipping and rotating the existing ones. Generating new image samples from existing ones is known as image augmentation.
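
A minimal sketch of such a pipeline, assuming a recent KerasCV version (layer names may differ across releases):

```python
import tensorflow as tf
import keras_cv

# A small augmentation pipeline: random horizontal flips, then RandAugment,
# which applies a random mix of transformations per image. value_range
# tells RandAugment the pixel scale of the inputs.
augmenter = tf.keras.Sequential([
    keras_cv.layers.RandomFlip(mode="horizontal"),
    keras_cv.layers.RandAugment(value_range=(0, 255)),
])

images = tf.random.uniform((8, 224, 224, 3), maxval=255)  # stand-in batch
augmented = augmenter(images, training=True)
```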

How to Train Stable Diffusion With Keras
Image generation models are causing a sensation worldwide, particularly the powerful Stable Diffusion technique. With Stable Diffusion, you can generate images on your laptop, which was previously impossible. Here's how diffusion models work in plain English: generating an image involves two processes. Forward diffusion gradually adds noise to an image until only noise remains; the reverse process trains a model to remove that noise step by step, turning random noise into a new image.
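
A minimal sketch of generating an image with KerasCV's pretrained Stable Diffusion pipeline (the prompt and sizes are illustrative):

```python
import keras_cv

# KerasCV ships a pretrained Stable Diffusion pipeline; the weights are
# downloaded on first use, and a GPU is strongly recommended.
model = keras_cv.models.StableDiffusion(img_width=512, img_height=512)
images = model.text_to_image(
    "a photograph of an astronaut riding a horse",
    batch_size=1,
)
```
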
How to Generate Images with Variational Autoencoders (VAE): Create a VAE from scratch using Keras and TensorFlow
An autoencoder takes an input image and creates a low-dimensional representation, i.e., a latent vector. This vector is then used to reconstruct the original image. Regular autoencoders take an image as input and output the same image. Variational autoencoders (VAEs), however, generate new images with the same distribution as the training data.
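
The step that makes a VAE trainable is the reparameterization trick, which samples the latent vector in a differentiable way. A minimal sketch of an encoder head in Keras (layer sizes are illustrative):

```python
import tensorflow as tf
from tensorflow import keras

class Sampling(keras.layers.Layer):
    """Reparameterization trick: z = mean + exp(log_var / 2) * eps,
    so gradients can flow through the random sampling step."""
    def call(self, inputs):
        z_mean, z_log_var = inputs
        eps = tf.random.normal(shape=tf.shape(z_mean))
        return z_mean + tf.exp(0.5 * z_log_var) * eps

# Encoder head: two dense layers predict the latent distribution's
# mean and log-variance, then Sampling draws a latent vector z.
latent_dim = 2
x = keras.Input(shape=(784,))
h = keras.layers.Dense(64, activation="relu")(x)
z_mean = keras.layers.Dense(latent_dim)(h)
z_log_var = keras.layers.Dense(latent_dim)(h)
z = Sampling()([z_mean, z_log_var])
encoder = keras.Model(x, [z_mean, z_log_var, z])
```
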
Distributed training with TensorFlow: How to train Keras models on multiple GPUs
Training computer vision models takes a long time because of the size of both the models and the image data, especially when training on a single GPU. You can reduce the training time by distributing the training across several GPUs. This article shows how to do that for Keras models.
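
A minimal sketch with tf.distribute.MirroredStrategy, which replicates the model across all visible GPUs and averages the gradients (the model itself is a placeholder):

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
print("Replicas:", strategy.num_replicas_in_sync)

# Model creation and compilation must happen inside the strategy scope;
# model.fit(...) afterwards trains across the replicas as usual.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(32,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )
```
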
Create U-Net from scratch (Image segmentation with U-Net in Keras and TensorFlow)
In the Implementing Fully Convolutional Networks (FCNs) from scratch in Keras and TensorFlow article, you saw how to build an image segmentation model with FCNs. However, due to the model's limitations, it did not perform very well on the segmentation task. In this post, you will see how to improve on those results by building a U-Net from scratch.
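
The defining U-Net ingredient is the skip connection from encoder to decoder. A toy, one-level sketch in Keras (filter counts and the 3-class output are placeholders):

```python
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(128, 128, 3))

# Encoder: convolve, then downsample, keeping the pre-pool features.
c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
p1 = layers.MaxPooling2D()(c1)

# Bottleneck.
b = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)

# Decoder: upsample and concatenate the matching encoder features,
# the characteristic U-Net skip connection.
u1 = layers.Conv2DTranspose(16, 2, strides=2, padding="same")(b)
u1 = layers.Concatenate()([u1, c1])
c2 = layers.Conv2D(16, 3, padding="same", activation="relu")(u1)

# Per-pixel class scores (3 classes here is a placeholder).
outputs = layers.Conv2D(3, 1, activation="softmax")(c2)
unet = keras.Model(inputs, outputs)
```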

Transfer learning guide (with examples for text and images in Keras and PyTorch)
Training computer vision (CV) or natural language processing (NLP) models from scratch can be expensive and requires large datasets. If labeling is done manually, preparing those datasets is slow and costly, and training itself demands expensive hardware. For instance, the Generative Pre-trained Transformer 2 (GPT-2), a benchmark-setting language model created by OpenAI, was trained on roughly 40 GB of web text. Transfer learning lets you reuse such pretrained models instead of training from scratch.
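
A minimal transfer-learning sketch in Keras: freeze an ImageNet-pretrained backbone and train only a new head (the backbone choice and the 5-class head are illustrative):

```python
from tensorflow import keras

# Reuse an ImageNet-pretrained backbone without its classification top.
base = keras.applications.ResNet50(
    include_top=False, weights="imagenet",
    input_shape=(224, 224, 3), pooling="avg",
)
base.trainable = False  # freeze the pretrained weights

inputs = keras.Input(shape=(224, 224, 3))
x = keras.applications.resnet50.preprocess_input(inputs)
x = base(x, training=False)
outputs = keras.layers.Dense(5, activation="softmax")(x)  # 5 classes: placeholder
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```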