Torchvision's datasets module provides ready-to-use versions of popular computer vision datasets for developing and testing machine learning models. With these datasets, developers can train and evaluate models on a range of tasks, such as image classification, object detection, and segmentation.
To access the MNIST dataset, you can download it directly through torchvision by setting download=True:
import torchvision.datasets as datasets
# Load the training dataset
train_dataset = datasets.MNIST(root='data/', train=True, transform=None, download=True)
# Load the testing dataset
test_dataset = datasets.MNIST(root='data/', train=False, transform=None, download=True)
Code for loading the MNIST dataset using the PyTorch torchvision package. Retrieved on 20/3/2023.
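The snippet above loads MNIST as raw PIL images (transform=None). As a minimal sketch of how the dataset would typically be fed to a model, you can convert the images to tensors and wrap the dataset in a DataLoader; the batch size below is illustrative:
import torch
import torchvision.datasets as datasets
import torchvision.transforms as transforms
# Convert PIL images to tensors so they can be batched
train_dataset = datasets.MNIST(root='data/', train=True, transform=transforms.ToTensor(), download=True)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=64, shuffle=True)
# Fetch one batch of images and labels
images, labels = next(iter(train_loader))
print(images.shape)  # torch.Size([64, 1, 28, 28])
print(labels.shape)  # torch.Size([64])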
The CIFAR-10 dataset can be downloaded automatically through torchvision by setting download=True:
import torch
import torchvision
import torchvision.transforms as transforms
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                       download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
                                         shuffle=False, num_workers=2)
Code for loading the CIFAR-10 dataset using the PyTorch torchvision package. Retrieved on 20/3/2023.
To download the CIFAR-100 dataset from Kaggle, please visit the Kaggle website; alternatively, torchvision can download it automatically, as in the code below:
import torch
import torchvision.datasets as datasets
import torchvision.transforms as transforms
# Define transform to normalize data
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
])
# Load CIFAR-100 train and test datasets
trainset = datasets.CIFAR100(root='./data', train=True, download=True, transform=transform)
testset = datasets.CIFAR100(root='./data', train=False, download=True, transform=transform)
# Create data loaders for train and test datasets
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=False)
Code for loading the CIFAR-100 dataset using the PyTorch torchvision package. Retrieved on 20/3/2023.
To download the ImageNet dataset, you have to visit the official ImageNet website and fetch the archives manually, since torchvision cannot download ImageNet automatically:
import torchvision.datasets as datasets
import torchvision.transforms as transforms
# Set the path to the ImageNet dataset on your machine
data_path = "/path/to/imagenet"
# Create the ImageNet dataset objects; the archives must already be placed
# under data_path, since torchvision cannot download ImageNet itself
imagenet_train = datasets.ImageNet(
    root=data_path,
    split='train',
    transform=transforms.Compose([
        transforms.Resize(256),
        transforms.RandomCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize(
            mean=[0.485, 0.456, 0.406],
            std=[0.229, 0.224, 0.225])
    ])
)
imagenet_val = datasets.ImageNet(
    root=data_path,
    split='val',
    transform=transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(
            mean=[0.485, 0.456, 0.406],
            std=[0.229, 0.224, 0.225])
    ])
)
# Print the number of images in the training and validation sets
print("Number of images in the training set:", len(imagenet_train))
print("Number of images in the validation set:", len(imagenet_val))
Code for loading the ImageNet dataset using the PyTorch torchvision package. Retrieved on 21/3/2023.
To download the MS COCO dataset, please visit the official COCO website and download the images and annotation files manually:
import torch
from torchvision import datasets, transforms
# Define transformation
transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225])
])
# Detection targets differ in length from image to image, so the default
# collate function cannot stack them; group each batch into tuples instead
def collate_fn(batch):
    return tuple(zip(*batch))
# Load training dataset (CocoDetection requires the pycocotools package)
train_dataset = datasets.CocoDetection(root='/path/to/dataset/train2017',
                                       annFile='/path/to/dataset/annotations/instances_train2017.json',
                                       transform=transform)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=32, shuffle=True,
                                           collate_fn=collate_fn)
# Load validation dataset
val_dataset = datasets.CocoDetection(root='/path/to/dataset/val2017',
                                     annFile='/path/to/dataset/annotations/instances_val2017.json',
                                     transform=transform)
val_loader = torch.utils.data.DataLoader(val_dataset, batch_size=32, shuffle=False,
                                         collate_fn=collate_fn)
Code for loading the MS COCO dataset using the PyTorch torchvision package. Retrieved on 21/3/2023.
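With the tuple-based collate function above, each batch is a pair of tuples rather than stacked tensors, since every image can contain a different number of annotated objects. A minimal sketch of inspecting one batch (variable names are illustrative):
# images is a tuple of image tensors; targets is a tuple of per-image lists
# of COCO annotation dictionaries
images, targets = next(iter(val_loader))
print(len(images), len(targets))  # 32 32
print(images[0].shape)            # torch.Size([3, 224, 224])
if targets[0]:
    # each annotation dictionary carries fields such as 'bbox' and 'category_id'
    print(targets[0][0]['bbox'], targets[0][0]['category_id'])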
The Fashion-MNIST dataset can be downloaded automatically through torchvision by setting download=True:
import torch
import torchvision
import torchvision.transforms as transforms
# Define transformations
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5,), (0.5,))])
# Load the dataset
trainset = torchvision.datasets.FashionMNIST(root='./data', train=True,
                                             download=True, transform=transform)
testset = torchvision.datasets.FashionMNIST(root='./data', train=False,
                                            download=True, transform=transform)
# Create data loaders
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
                                         shuffle=False, num_workers=2)
Code for loading the Fashion-MNIST dataset using the PyTorch torchvision package. Retrieved on 21/3/2023.
To download the SVHN dataset, you can go to the official SVHN website, or simply let torchvision fetch it with download=True:
import torchvision
import torch
# Load the train and test sets
train_set = torchvision.datasets.SVHN(root='./data', split='train', download=True, transform=torchvision.transforms.ToTensor())
test_set = torchvision.datasets.SVHN(root='./data', split='test', download=True, transform=torchvision.transforms.ToTensor())
# Create data loaders
train_loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=64, shuffle=False)
Code for loading the SVHN dataset using the PyTorch torchvision package. Retrieved on 22/3/2023.
To access the STL-10 dataset, you can download it directly through torchvision by setting download=True:
import torchvision.datasets as datasets
import torchvision.transforms as transforms
# Define the transformation to apply to the data
transform = transforms.Compose([
    transforms.ToTensor(),                                    # convert PIL image to PyTorch tensor
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))    # normalize the data
])
# Load the STL-10 dataset
train_dataset = datasets.STL10(root='./data', split='train', download=True, transform=transform)
test_dataset = datasets.STL10(root='./data', split='test', download=True, transform=transform)
Code for loading the STL-10 dataset using the PyTorch torchvision package. Retrieved on 22/3/2023.
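The snippet above only builds the dataset objects; as with the other datasets, they are typically wrapped in data loaders before training. A minimal sketch, with an illustrative batch size:
import torch
# Wrap the STL-10 datasets in loaders for batched training and evaluation
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=64, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=64, shuffle=False)
images, labels = next(iter(train_loader))
print(images.shape)  # torch.Size([64, 3, 96, 96]) -- STL-10 images are 96x96 RGB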
You can download the CelebA dataset automatically through torchvision by setting download=True, or obtain it from the official CelebA project page:
import torchvision.datasets as datasets
import torchvision.transforms as transforms
transform = transforms.Compose([
    transforms.CenterCrop(178),
    transforms.Resize(128),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
celeba_dataset = datasets.CelebA(root='./data', split='train', transform=transform, download=True)
Code for loading the CelebA dataset using the PyTorch torchvision package. Retrieved on 22/3/2023.
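By default, CelebA pairs each image with its 40 binary attribute labels (target_type='attr'). A minimal sketch of batching the dataset, with an illustrative batch size:
import torch
celeba_loader = torch.utils.data.DataLoader(celeba_dataset, batch_size=32, shuffle=True)
# Each batch pairs image tensors with a 40-dimensional attribute vector per image
images, attrs = next(iter(celeba_loader))
print(images.shape)  # torch.Size([32, 3, 128, 128]) after the transforms above
print(attrs.shape)   # torch.Size([32, 40])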
To access the PASCAL VOC dataset, you can download it from the official PASCAL VOC website; torchvision can also fetch it when download=True is set:
import torch
import torchvision
from torchvision import transforms
# Define transformations to apply to the images
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])
# Detection targets are per-image annotation dictionaries of varying size,
# so group each batch into tuples instead of stacking
def collate_fn(batch):
    return tuple(zip(*batch))
# Load the train and validation datasets
train_dataset = torchvision.datasets.VOCDetection(root='./data', year='2007', image_set='train',
                                                  download=True, transform=transform)
val_dataset = torchvision.datasets.VOCDetection(root='./data', year='2007', image_set='val',
                                                download=True, transform=transform)
# Create data loaders
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=32, shuffle=True,
                                           collate_fn=collate_fn)
val_loader = torch.utils.data.DataLoader(val_dataset, batch_size=32, shuffle=False,
                                         collate_fn=collate_fn)
Code for loading the PASCAL VOC dataset using the PyTorch torchvision package. Retrieved on 22/3/2023.
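VOCDetection returns each image together with its annotation parsed from the Pascal VOC XML file into a nested dictionary. A minimal sketch of inspecting a single sample (key names follow the VOC XML layout):
# The target mirrors the structure of the VOC XML annotation file
image, target = train_dataset[0]
objects = target['annotation']['object']  # list of annotated objects in the image
for obj in objects:
    # each object records its class name and bounding box coordinates
    print(obj['name'], obj['bndbox'])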
To access the Places365 dataset, you can use torchvision's Places365 class; the data must already be present under the root directory (or be fetched by passing download=True):
import torch
import torchvision
from torchvision import transforms
# Define transformations to apply to the images
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])
# Load the train and validation datasets
train_dataset = torchvision.datasets.Places365(root='./data', split='train-standard', transform=transform)
val_dataset = torchvision.datasets.Places365(root='./data', split='val', transform=transform)
# Create data loaders
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=32, shuffle=True)
val_loader = torch.utils.data.DataLoader(val_dataset, batch_size=32, shuffle=False)
Code for loading the Places365 dataset using the PyTorch torchvision package. Retrieved on 22/3/2023.
The lead image of this article was generated via HackerNoon's AI Stable Diffusion model using the prompt 'thousands of images organized together in small frames'.