This is a post that I have been putting off for a while, but I think the time has come to share this with the community. Two years ago I sat down to start a new project, an experiment involving image downscaling and Node.js, and since then it has become my primary open-source project.
I wanted to generate responsive images for my website to offer a better experience. It came to life as a set of Node.js scripts, and over the course of several iterations, evolved into an open-source package released on npm under the name Responsive Image Builder.
It was created out of necessity due to a lack of existing open-source solutions.
Let me be clear: there is a variety of existing image tools, loaders and frameworks. None of them, however, fulfilled my needs. Furthermore, I was in love with sharp and the primitive library by Michael Fogleman (which was difficult to integrate into existing solutions).
This led me to create my own solution to solve my rather unique requirements:
My goal was to glue existing image libraries together into a unified toolset that could be customised to process images in different ways.
Psst! You can read more about the motivation behind the project.
Today it goes by a different name that better reflects its new functionality (and partly due to a reserved package scope 🤦‍♂️): Image Processing Pipeline. The processing "workflow" is now completely customisable and it has also just had a major release that refactored the internals, making it easier to implement adapters, such as the new webpack loader!
The new IPP features a declarative pipeline format. Tell it *how* it should generate your images!
Much to my own surprise, the open-source repository on GitHub has been slowly gaining stars, received its first issue and was even featured in an article, despite me never having shared or mentioned the project online. I wanted to wait for a truly stable release of IPP before announcing it officially, but it seems that the online community is restless and eager to try new things!
Perhaps this is the right time. Summer is coming to an end and my university studies have resumed. I am, however, still fully committed to maintaining the project for the foreseeable future. It is an enormous undertaking for a single contributor, though, so don't expect nightly releases.
Maybe try using it on a smaller project and see if it suits your needs. If you feel like contributing, PRs are welcome! Bear in mind that the aim is to keep the core as simple as possible to promote maintainability and avoid feature bloat.
IPP is not limited to website development, but may also prove useful for batch image processing or backend image jobs, as it does not require any code to use.
I have been hard at work creating a new documentation website that is accessible to users of all technical backgrounds. Bear in mind that it is still an active work in progress. Until it is completed, there is also the option of consulting the repository readme, which aims to be simple and human-readable.
The following section is a quick-start guide for the command-line interface. A more complete example is available at the website above.
IPP runs on Node.js and is distributed via npm. They are bundled together and can be obtained from the official Node.js website.
It is recommended to use an LTS release or a slightly older version of Node.js to avoid installation problems.
Open up a terminal somewhere and execute the following (without the dollar sign), which will install the CLI globally on your system (other installation options are available):
$ npm install --global @ipp/cli
And that's it! ✨
Once again, if you get a node-gyp-related installation error, try an older release of Node.js to avoid having to install Python and a C++ compiler. This is a design limitation: compiled languages are hard, and IPP wants to be fast!
Grab some images and chuck them into a new folder with a name of your choosing.
Next to that folder, create a configuration file called .ipprc.yml. Copy and paste the following into that file (replacing "images" with the name of the folder you created previously):
input: images
output: formats
manifest:
  source:
    x: hash:12
  format:
    w: width
    p: path
pipeline:
  - pipe: resize
    options:
      resizeOptions:
        width: 1280
    save: "[source.name]-[hash:8][ext]"
Next, open up a terminal prompt, navigate to the folder containing the configuration file and run IPP:
$ ipp
If everything was set up correctly, IPP will display some helpful runtime information and report a successful operation. There should now be a second folder called formats with a bunch of new images!
The above configuration file takes each source image and resizes it to have a maximum width of 1280 pixels. Images smaller than this threshold will not be resized but passed along. This is where IPP starts to shine! The image is then saved, using IPP's version of template literals to generate the filename.
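If you are curious how those square-bracket selectors behave, here is a purely illustrative TypeScript sketch of the idea. The expandTemplate helper and the flat metadata object are made up for this example and are not how IPP implements it internally:

// Hypothetical illustration of how a save template such as
// "[source.name]-[hash:8][ext]" could be expanded from metadata.
// The helper and the metadata shape are invented for this example.

type Metadata = Record<string, string>;

function expandTemplate(template: string, metadata: Metadata): string {
  // Replace each "[selector]" or "[selector:limit]" with its metadata value
  return template.replace(/\[([^\]:]+)(?::(\d+))?\]/g, (_, key: string, limit?: string) => {
    const value = metadata[key] ?? "";
    return limit ? value.slice(0, Number(limit)) : value;
  });
}

const filename = expandTemplate("[source.name]-[hash:8][ext]", {
  "source.name": "red-green-macaw",
  hash: "74cd8540a1b2c3d4", // made-up format hash
  ext: ".jpg",
});

console.log(filename); // "red-green-macaw-74cd8540.jpg"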
Additionally, notice the manifest.json file in the formats directory. This contains a JSON summary of all output results. For example, you might find an entry that resembles the following:
{
  "f": [
    {
      "w": 1280,
      "p": "red-green-macaw-74cd8540.jpg"
    }
  ],
  "s": { "x": "7f5d4e26980c" }
}
The manifest file is generated based on the manifest key in the configuration file. The current manifest configuration outputs the hash of the source image, limited to 12 characters, and the width and path of each output format image.
Tip: the source hash is generated from the file contents and will never change; it could be useful as an image-lookup mechanism.
IPP is not a blind resize tool but is context-aware. It also aims to make the consumer (such as the browser) aware by providing a list of available images and letting it pick the best-suited one based on image dimensions, codec, etc…
The manifest file can be further processed, or it can be downloaded and cached by the client. There are even better options, such as the webpack loader, which converts image imports into a single manifest entry like the example above.
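To give a rough idea of what consuming the manifest could look like, here is a small TypeScript sketch. The pickImage helper is hypothetical; only the entry shape mirrors the manifest example above:

// Hypothetical consumer-side helper that picks the best-suited format
// from a manifest entry shaped like the example above.

interface ManifestEntry {
  f: { w: number; p: string }[]; // formats: width and path
  s: { x: string };              // source: truncated content hash
}

function pickImage(entry: ManifestEntry, targetWidth: number): string {
  // Prefer the smallest format that still covers the target width,
  // falling back to the largest available format.
  const sorted = [...entry.f].sort((a, b) => a.w - b.w);
  const match = sorted.find((format) => format.w >= targetWidth);
  return (match ?? sorted[sorted.length - 1]).p;
}

const entry: ManifestEntry = {
  f: [{ w: 1280, p: "red-green-macaw-74cd8540.jpg" }],
  s: { x: "7f5d4e26980c" },
};

console.log(pickImage(entry, 720)); // "red-green-macaw-74cd8540.jpg"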
A single image transformation is represented by the concept of a pipe. In reality, it's an asynchronous pure function (you can even make your own!).
An image is represented by a binary buffer and a metadata object. They are related pieces of data and therefore stay together (internally referred to as a DataObject). Metadata provides contextual information about the buffer and is slowly "built up" by each pipe.
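To make that concrete, here is a rough sketch of what a custom pipe could look like. The interfaces below are simplified assumptions for illustration and are not copied from the @ipp/core typings:

// A simplified model of a pipe: an asynchronous pure function that
// receives a buffer plus metadata and returns a new DataObject.
// These interfaces are illustrative assumptions, not the real typings.

interface DataObject {
  buffer: Buffer;
  metadata: Record<string, unknown>;
}

type Pipe<Options> = (data: DataObject, options?: Options) => Promise<DataObject>;

// Example: a "passthrough" pipe that only annotates the metadata.
const annotatePipe: Pipe<{ label: string }> = async (data, options) => {
  return {
    buffer: data.buffer, // the image itself is left untouched
    metadata: {
      ...data.metadata,
      label: options?.label ?? "unlabelled",
    },
  };
};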
Pipes may be interconnected using the then property to create a pipeline branch (resembling a tree). An array of pipeline branches creates a pipeline.
- pipe: resize
  then:
    - pipe: compress
      save: "[source.name][ext]"
    - pipe: convert
      ...
Every pipe may additionally specify a save property, which exports its output from the pipeline. The exported image is called a format and is a snapshot of the DataObject at that point in the pipeline (immutability is key here).
This is the basic architecture of the @ipp/core package. Implementations (such as the command-line interface) can decide on the finer details, such as how to handle the save key and store metadata.
Thank you for sticking around until the end! There is a lot more that you can read in the documentation.
There are more planned features, such as asynchronous iterator support to improve memory efficiency, optional disk-based caching, more adapters, front-end integrations, …
This is my first article on an open-source project and I'm excited to hear your comments and feedback! 😁 What workflows do you use for your images?
P.S. I'll be absolutely gutted if someone replies with "hey, this already exists, check out…". At least it was a great learning experience, right…?