So, using the bytenode tool, you can distribute a binary version (.jsc) of your JavaScript files. You can also bundle all your .js files using Browserify, then compile that single file into .jsc.
Install bytenode globally:

[sudo] npm install -g bytenode
To compile your .js file, run this command:

bytenode --compile my-file.js my-file.jsc
Install bytenode in your project too:

npm install --save bytenode
In your code, require bytenode:

const bytenode = require('bytenode');

This registers the .jsc extension in the Node.js module system, which is why we installed it locally as well.
You can now require my-file.jsc as a module:

const myFile = require('./my-file.jsc');
You can also remove my-file.js from the production build.
And if you want to run my-file.jsc using the bytenode CLI:

bytenode --run my-file.jsc
Now you know how to compile .js files, how to require the compiled version in your code, and how to run .jsc files from the terminal. Let’s move on to the long story.
Starting from Node.js v5.7.0, the vm module introduced an option called produceCachedData in the vm.Script constructor, so if you do something like this:
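A minimal sketch (the variable name helloScript is only illustrative; the source console.log("Hello World!"); is the 28-character example used in the rest of this section):

```js
const vm = require('vm');

// Ask V8 to produce the cached data (bytecode) while compiling the script.
const helloScript = new vm.Script('console.log("Hello World!");', {
  produceCachedData: true
});
```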
Then, get the bytecode (or cachedData) buffer:
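Assuming the helloScript sketched above:

```js
// The bytecode buffer V8 produced for the script above.
const helloBuffer = helloScript.cachedData;
```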
This helloBuffer can be used to create an identical script that will execute the same instructions when it runs, by passing it to the vm.Script constructor:
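For instance, passing an empty string as the source:

```js
// Try to build a script from the cached bytecode alone, with an empty string as source.
const anotherHelloScript = new vm.Script('', {
  cachedData: helloBuffer
});
```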
But this will fail: the V8 engine will complain about the first argument (that empty string ''), when it checks whether it is the same code that was used to generate helloBuffer in the first place. However, this check turns out to be quite superficial: it is only the length of the code that matters. So, this will work:
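A sketch using a space repeated 28 times as the placeholder source (any 28-character string should satisfy the length check described above):

```js
// A placeholder source of 28 spaces: same length as 'console.log("Hello World!");'.
const anotherHelloScript = new vm.Script(' '.repeat(28), {
  cachedData: helloBuffer
});
```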
We give it a placeholder string with the same length (28) as the original code (console.log("Hello World!");). That’s it!
This is interesting: using the cached buffer and the original code length, we were able to create an identical script. Both scripts can be run using the .runInThisContext() function. So if you run them:
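(Both names below refer to the scripts sketched earlier.)

```js
helloScript.runInThisContext();        // prints: Hello World!
anotherHelloScript.runInThisContext(); // prints: Hello World!
```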
you will see ‘Hello World!’ twice.
(Note that if you used the wrong length, or a different version of Node.js/V8, anotherHelloScript won’t run, and its cachedDataRejected property will be set to true.)
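A quick way to detect that case (a sketch, assuming anotherHelloScript was created as above):

```js
if (anotherHelloScript.cachedDataRejected) {
  // The buffer came from a different Node.js/V8 build, or the placeholder length was wrong.
  console.error('cachedData was rejected');
}
```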
Now to our last step: when we defined anotherHelloScript, we used a hard-coded value (28) as our code length. How can we change this, so that at runtime we don’t have to know exactly how long the original source code was?
After some digging in the V8 source code, I found that the header information is defined in this file, code-serializer.h:
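Paraphrasing the relevant comment (the exact layout has changed across V8 versions, so treat this as a snapshot of the version that was current at the time):

```js
// The cached data header consists of uint32_t-sized entries (paraphrased):
// [0] magic number and external reference count
// [1] version hash
// [2] source hash
// [3] cpu features
// [4] flag hash
// ... followed by the serialized payload
```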
But a Node.js Buffer is a Uint8Array typed array. This means that each entry of the uint32 array takes four entries in the uint8 buffer. So the value we are after (the source hash at index [2], which occupies bytes [8, 9, 10, 11] of the Node buffer) will be:
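For example, slicing those four bytes out of helloBuffer:

```js
// Bytes 8..11 of the cached data buffer hold the source hash (our code length).
const sourceHashBytes = helloBuffer.slice(8, 12);
console.log(sourceHashBytes);
```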
It will be something like this: <Buffer 1c 00 00 00>, which is little-endian, so it reads 0x0000001c. That is our code length (28 in decimal).
To convert those four bytes manually, compute firstByte + (secondByte * 256) + (thirdByte * 256**2) + (fourthByte * 256**3), as I did in my library; check it to see the full recipe.
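In code, that manual conversion looks roughly like this (a sketch; helloBuffer is the cached data buffer from earlier):

```js
const sourceLength =
  helloBuffer[8] +
  helloBuffer[9] * 256 +
  helloBuffer[10] * 256 ** 2 +
  helloBuffer[11] * 256 ** 3; // 28 for our example
```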
Alternatively, we could use the [buf.readIntLE()](//nodejs.org/api/buffer.html#buffer_buf_readintle_offset_bytelength) function, which does exactly what we want:
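A sketch using the same offset and byte count as above:

```js
// Read 4 bytes starting at offset 8 as a little-endian integer.
const sourceLength = helloBuffer.readIntLE(8, 4); // 28 for our example
```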
Once you have read the length of the original code (the code that was used to generate the cachedData buffer), you can create your script:
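Putting it all together (variable names are illustrative):

```js
const vm = require('vm');

// helloBuffer is a cachedData buffer produced earlier (e.g. loaded from a .jsc file).
const sourceLength = helloBuffer.readIntLE(8, 4);

// A placeholder source of the right length is enough to satisfy V8's check.
const anotherHelloScript = new vm.Script(' '.repeat(sourceLength), {
  cachedData: helloBuffer
});

anotherHelloScript.runInThisContext(); // prints: Hello World!
```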
Finally, does this technique have an impact on performance? Well, in recent versions of V8 (and Node.js), the performance is almost the same. Using the Octane benchmark, I did not find any difference in performance. I know that Google deprecated Octane (because browsers and JS engines were cheating), but the results are still meaningful in our situation, because we are comparing the same code on the same JS engine. So, the final answer is: Bytenode does NOT have a negative impact on performance.
Check my bytenode repository, where you can find complete working examples. I have added an example for Electron (which has no source code protection at all) and one for NW.js (which has a similar tool, nwjc, but it works only with browser-side code). I will add more examples (and tests) soon, hopefully.