
I'm trying to use the pretrained InceptionV3 model to classify the food-101 dataset, which contains food images for 101 categories, 1000 per category. I've preprocessed this dataset into a single hdf5 file so far (I assumed this is beneficial compared to loading the images on the go while training), which has the following tables inside: train_img, train_labels, valid_img, valid_labels, test_img, test_labels. The data split is the standard 70% train, 20% validation, 10% test, so for example valid_img has a size of 20200*299*299*3. The labels are one-hot encoded for Keras, so valid_labels has a size of 20200*101. This hdf5 file has a size of 27.1 GB, so it will not fit into my memory. (I have 8 GB of RAM, although effectively only about 4-5 GB is usable while running Ubuntu. My GPU is also a GTX 960 with 2 GB of VRAM, and so far it looked like about 1.5 GB of it is available to Python when I start the training script.)
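For reference, the layout can be sanity-checked like this (a minimal sketch; the table names are the ones listed above):

```python
import h5py

# Print the shape of each table in the preprocessed hdf5 file.
with h5py.File('/home/uzoltan/PycharmProjects/food-101/food-101_299x299.hdf5', 'r') as f:
    for name in ('train_img', 'train_labels', 'valid_img', 'valid_labels', 'test_img', 'test_labels'):
        print(name, f[name].shape)  # e.g. valid_img -> (20200, 299, 299, 3)
```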

The first idea I had is to use train_on_batch() with a double nested for loop, like this:

```python
import h5py

# Loading InceptionV3, adding my fully connected layers, compiling the model...

dataset = h5py.File('/home/uzoltan/PycharmProjects/food-101/food-101_299x299.hdf5', 'r')

for epoch in range(10):  # 10 epochs, for example
    for i in range(100):  # 1000 images can fit in the memory easily, this could probably be range(10) too
        train_images = dataset['train_img'][i*1000:(i+1)*1000]  # next chunk of images
        train_labels = dataset['train_labels'][i*1000:(i+1)*1000]  # and their one-hot labels
        model.train_on_batch(x=train_images, y=train_labels, class_weight=None, sample_weight=None)
```

My problem with this approach is that train_on_batch provides no support for validation, or for shuffling the batches so that they are not in the same order in every epoch.
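One way I could imagine bolting both onto the same loop is to shuffle the order of the 1000-image chunks each epoch and to run a validation pass between epochs; a rough sketch (the chunk slicing follows the table sizes above, and the validation slice is just a subset small enough to hold in memory):

```python
import numpy as np

epochs = 10                  # illustrative; model and dataset are the ones from the snippet above
chunk_ids = np.arange(100)   # one id per 1000-image chunk of train_img

for epoch in range(epochs):
    np.random.shuffle(chunk_ids)  # different chunk order every epoch
    for i in chunk_ids:
        # plain slices keep h5py happy; only the chunk order is shuffled,
        # not the images within a chunk
        x = dataset['train_img'][i*1000:(i+1)*1000]
        y = dataset['train_labels'][i*1000:(i+1)*1000]
        model.train_on_batch(x=x, y=y)
    # validation pass on a slice that fits in memory
    val_scores = model.evaluate(dataset['valid_img'][:2000],
                                dataset['valid_labels'][:2000], verbose=0)
    print('epoch', epoch, 'validation:', val_scores)
```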
