The image uploading algorithm

As part of the iOS development team, I helped build a service that lets users upload photos and share them with other users across many kinds of activities, while keeping the resources needed to a minimum.

To add images, a user opens a custom camera and takes a few pictures, or opens the photo library and selects existing images. We needed to store a large number of images on the server and provide quick access to them, so we decided to upload the images to Amazon S3, get their URLs, and work with those. The algorithm described below is how we take, upload, and download images to reach that goal.


Starting with iOS 9, developers should use the Photos framework. It has considerable advantages over the deprecated AssetsLibrary framework, but it also has drawbacks. The main problem we ran into was picking a photo from the library: the first time you select an image, the full-size version has to be downloaded from iCloud, which takes some time, although the framework immediately returns a small preview (about 200×200).
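Roughly, the picking flow looks like the sketch below (the function and variable names are illustrative rather than taken from our code). With opportunistic delivery the Photos framework calls back first with the small degraded preview and later with the full-size image fetched from iCloud:

```swift
import Photos
import UIKit

// A sketch of requesting a picked asset, assuming `asset` is the PHAsset the user
// selected. With .opportunistic delivery the handler fires first with a small
// degraded preview and again later with the full-size image from iCloud.
func loadImage(for asset: PHAsset, completion: @escaping (UIImage?, Bool) -> Void) {
    let options = PHImageRequestOptions()
    options.deliveryMode = .opportunistic
    options.isNetworkAccessAllowed = true   // allow the download from iCloud

    PHImageManager.default().requestImage(
        for: asset,
        targetSize: PHImageManagerMaximumSize,
        contentMode: .aspectFit,
        options: options
    ) { image, info in
        // `isDegraded` is true for the quick low-resolution preview
        let isDegraded = (info?[PHImageResultIsDegradedKey] as? Bool) ?? false
        completion(image, isDegraded)
    }
}
```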


The Photos framework lets you preload assets into a cache, so we used that to optimize this process. As the user selects images, we preload them, and by the time selection is finished, the original-size photos have already been downloaded from the cloud.
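A minimal sketch of that preloading step, assuming `selectedAssets` holds the PHAssets the user has tapped so far (the names are illustrative):

```swift
import Photos

// Start fetching the selected assets (including from iCloud) in the background,
// so the originals are already local when the user finishes selecting.
let cachingManager = PHCachingImageManager()

func preload(_ selectedAssets: [PHAsset]) {
    let options = PHImageRequestOptions()
    options.isNetworkAccessAllowed = true

    cachingManager.startCachingImages(
        for: selectedAssets,
        targetSize: PHImageManagerMaximumSize,
        contentMode: .aspectFit,
        options: options
    )
}
```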

After receiving all the photos from the device/camera/cloud, we calculate their total size in megabytes and their count, and send a request to the server to find out whether the user's plan has enough space for those files. If there is not enough space, the server returns an error; otherwise it creates Photo objects in the database, each with an ID and a creation date.
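The check itself is a plain REST call. The endpoint, field names, and Photo shape below are hypothetical, just to illustrate the idea:

```swift
import Foundation

// Hypothetical request/response types for the space check; the real endpoint and
// fields belong to our backend and differ in detail.
struct UploadCheckRequest: Encodable {
    let folderId: String
    let photoCount: Int
    let totalSizeMB: Double
}

struct PhotoStub: Decodable {
    let id: String
    let createdAt: Date
}

func requestUploadSlots(_ body: UploadCheckRequest,
                        completion: @escaping (Result<[PhotoStub], Error>) -> Void) {
    var request = URLRequest(url: URL(string: "https://api.example.com/photos/prepare")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try? JSONEncoder().encode(body)

    URLSession.shared.dataTask(with: request) { data, response, error in
        if let error = error { return completion(.failure(error)) }
        guard let data = data,
              let http = response as? HTTPURLResponse, http.statusCode == 200 else {
            // Not enough space on the user's plan (or another server-side error)
            return completion(.failure(NSError(domain: "upload", code: -1)))
        }
        let decoder = JSONDecoder()
        decoder.dateDecodingStrategy = .iso8601
        do {
            completion(.success(try decoder.decode([PhotoStub].self, from: data)))
        } catch {
            completion(.failure(error))
        }
    }.resume()
}
```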

After receiving the Photo objects, we begin to upload images to the server.

Uploading runs as a background task, so the user doesn't have to wait for it to finish and can keep using the app.
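One simple way to keep an upload alive when the app is backgrounded is UIKit's background task API; the sketch below shows the basic idea (the AWS transfer APIs can also be configured with a background URLSession):

```swift
import UIKit

// Ask the system for extra execution time so an in-flight upload can finish
// after the user leaves the app.
final class BackgroundUploadGuard {
    private var taskId: UIBackgroundTaskIdentifier = .invalid

    func begin() {
        taskId = UIApplication.shared.beginBackgroundTask(withName: "photo-upload") { [weak self] in
            // Called shortly before the system's time limit expires
            self?.end()
        }
    }

    func end() {
        guard taskId != .invalid else { return }
        UIApplication.shared.endBackgroundTask(taskId)
        taskId = .invalid
    }
}
```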


When uploading starts, we show the selected photos in the user's photo list with an “uploading” status and a percentage progress bar for each photo. When uploading finishes, we change the status to “uploaded” and the user can work with the files.
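A sketch of uploading one photo with a per-file progress callback, using the AWSS3 framework from our stack (the bucket name and key are placeholders):

```swift
import AWSS3

// Upload one image and report progress, which drives the percentage bar shown
// next to the photo while its status is "uploading".
func upload(_ data: Data, key: String,
            onProgress: @escaping (Double) -> Void,
            onDone: @escaping (Error?) -> Void) {
    let expression = AWSS3TransferUtilityUploadExpression()
    expression.progressBlock = { _, progress in
        DispatchQueue.main.async { onProgress(progress.fractionCompleted) }
    }

    AWSS3TransferUtility.default().uploadData(
        data,
        bucket: "my-photos-bucket",          // placeholder bucket name
        key: key,
        contentType: "image/jpeg",
        expression: expression
    ) { _, error in
        DispatchQueue.main.async { onDone(error) }
    }
}
```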

Another user with an account who opens this folder from a different device will see the Photo objects, but without images, because the images have not reached S3 yet. Those objects have an “in progress” status, meaning the images are still uploading and will be available soon. For the MVP we decided to use REST only, without sockets, so while there are Photo objects with an “in progress” status we refresh them from the server every 10 or 30 seconds (depending on the quality of the connection) until all objects have the “uploaded” status and we can display the images.
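A sketch of that polling loop (the fetch callback and status strings are illustrative names, not our actual API):

```swift
import Foundation

// While a folder still contains photos "in progress", refresh their statuses
// from the server every 10–30 seconds; stop once everything is uploaded.
final class PhotoStatusPoller {
    private var timer: Timer?

    func start(folderId: String, interval: TimeInterval = 10,
               fetchStatuses: @escaping (String, @escaping ([String]) -> Void) -> Void) {
        timer = Timer.scheduledTimer(withTimeInterval: interval, repeats: true) { [weak self] _ in
            fetchStatuses(folderId) { statuses in
                if !statuses.contains("in progress") {
                    self?.stop()
                }
            }
        }
    }

    func stop() {
        timer?.invalidate()
        timer = nil
    }
}
```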

As for uploading images to S3, there is also some architectural magic involved:

After receiving an array of images from the camera, we create an upload queue for them and add it to the global upload manager to be performed.

So we have a global upload manager that holds an array of queues. Each queue has an array of upload tasks and a folder ID.

Using the folder ID, we can access the image uploads from anywhere in the app and display them to the user.

The queues themselves are not that simple either. Each queue has an upload status. If the application is closed and terminated, all active queues change their status to “paused”; when the application launches again, paused queues resume and continue uploading images.

We decided to upload no more than five images at a time so as not to consume too much memory and CPU at once. For example, if a user starts uploading 20 images, they are split into four groups of five queued images, and the groups are uploaded one after another.
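A simplified sketch of this manager (the type and method names are ours, for illustration): a global manager holds one queue per folder, each queue holds its upload tasks, runs at most five of them at once, and can be paused and resumed.

```swift
import Foundation

// One queue per folder: holds the pending uploads, limits concurrency to five,
// and carries a status so it can be paused on termination and resumed on launch.
final class UploadQueue {
    enum Status { case uploading, paused, finished }

    let folderId: String
    var status: Status = .paused
    private let operationQueue = OperationQueue()
    private var pendingImages: [Data]

    init(folderId: String, images: [Data]) {
        self.folderId = folderId
        self.pendingImages = images
        operationQueue.maxConcurrentOperationCount = 5   // no more than five uploads at once
    }

    func resume(upload: @escaping (Data) -> Void) {
        status = .uploading
        // (a real implementation would drop images that have already finished)
        for image in pendingImages {
            operationQueue.addOperation { upload(image) }
        }
    }

    func pause() {
        status = .paused
        operationQueue.cancelAllOperations()
    }
}

// The global manager: lets any screen look up the uploads for a folder by ID,
// and resumes paused queues after the app is launched again.
final class UploadManager {
    static let shared = UploadManager()
    private(set) var queues: [UploadQueue] = []

    func enqueue(_ queue: UploadQueue) { queues.append(queue) }

    func queue(forFolder folderId: String) -> UploadQueue? {
        queues.first { $0.folderId == folderId }
    }

    func resumeAll(upload: @escaping (Data) -> Void) {
        queues.filter { $0.status == .paused }.forEach { $0.resume(upload: upload) }
    }
}
```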

After implementing the upload of full-size images to S3, we still had an open question about thumbnails: downloading a full-size image from the server just to display a small preview makes no sense. Uploading both a large and a small copy from the device doesn't make sense either, because that would require two upload requests to S3 for every image.

After some research, we decided to use AWS Lambda, which lets you run a script written in Java, Node.js, or Python. When a full-size image is uploaded, Amazon runs our script, which creates a small copy of the image and saves it to another folder in the bucket under a fixed naming scheme, like <imagename>_preview.jpeg. After the small copy is created and saved, a second script notifies our server that the images have been uploaded and saved. The server then changes the status of the Photo object to “uploaded” and the user can fetch the images from S3. Our scripts were developed in Node.js.
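A nice side effect of the fixed naming scheme is that the client can derive the thumbnail key from the original image name without an extra API call. A hedged sketch (the folder layout here is assumed):

```swift
import Foundation

// Derive the preview key from the original key, based on the
// <imagename>_preview.jpeg convention; the "previews/" folder name is assumed.
func previewKey(forOriginalKey key: String) -> String {
    // e.g. "photos/IMG_0421.jpeg" -> "previews/IMG_0421_preview.jpeg"
    let fileName = (key as NSString).lastPathComponent
    let baseName = (fileName as NSString).deletingPathExtension
    return "previews/\(baseName)_preview.jpeg"
}
```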


We also optimized how the client works with images. We pay Amazon for every image download, and since each photo needs two images, this can add up to a lot of requests to Amazon S3. So we download an image to the device once and then store it in a directory on the device. We don't keep images in the database, because a single image can be up to eight megabytes and the database would keep growing until it becomes too large.
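SDWebImage from our stack fits this pattern well: it downloads an image once and then serves it from its local disk cache, so repeated displays don't hit S3 again. A minimal usage sketch:

```swift
import SDWebImage
import UIKit

// Load a photo by URL; SDWebImage downloads it once and caches it on disk,
// so later displays are served locally instead of generating new S3 requests.
func show(_ url: URL, in imageView: UIImageView) {
    imageView.sd_setImage(with: url, placeholderImage: nil)
}
```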

As a result, by carefully working out the upload and download architecture, we minimized the resources used on both sides (on the device and on our server), on DigitalOcean and Amazon S3.

Technology stack:

Mobile (iOS):

  • iOS SDK frameworks
  • Custom Frameworks (TPKeyboardAvoiding, Fabric, SVProgressHUD, AWSS3, MagicalRecord, AFNetworking, SDWebImage)

Server side:

  • Backend (Ruby on Rails, Redis, Sidekiq, Swagger, Docker)
  • Database (PostgreSQL)
  • Hosting environment (Digital Ocean)
  • File storage (Amazon S3)
  • Mailer (Mandrill)

Front-end:

  • JS – ES6, Angular 1.5, CSS – SASS, HTML, Webpack