
ChromeVFX Prototype

TLDR: ChromeVFX uses Chrome as an MLT filter.

Demo



In the demo, I'm editing an MLT file with a WebVfx filter in Shotcut. Shotcut is connected to a Chrome instance via ChromeVFX.

As shown in the video, every frame is directly rendered in Chrome and reflected in Shotcut. I can also modify the web page directly in Chrome (zooming in/out).

In this setup, Shotcut is running in a Linux VM and Chrome is running on the Windows host, so technically this is already a remote Chrome. A local or headless Chrome should also work in theory.

Background


I've been using MLT and WebVfx for a while; together they allow me to render all kinds of things using web technologies.

WebVfx internally uses QtWebKit to render HTML/JS. QtWebKit uses Qt to enable communication between C++ and JavaScript, so it is quite easy to pass messages and events between them via the Qt language bindings.

However, QtWebKit is not an ideal choice. It was officially deprecated in Qt 5.5 and later removed, although we can still compile it from source. It uses an old version of WebKit with some bugs and missing HTML5 features. @annulen has been making efforts to bring it back up to date, but it doesn't seem ready yet. Besides, WebKit doesn't include V8.

There have been discussions about porting WebVfx to QtWebEngine or the Chromium Embedded Framework, which are Qt and C++ bindings of Chromium. In theory both should work, but in practice it's not that easy. I've been playing with both and didn't get very far. Both frameworks provide "raw access" to Chromium, which makes them very powerful; but it also means we have to handle things like message loops and coordination among various processes ourselves. I just got lost in the docs and code.

Recently I learned about the Chrome DevTools Protocol, and decided to give it a try.

Motivation


So my idea is to use Chrome as an MLT filter. Whenever MLT requests a frame, we pass all the information to Chrome, let it render, and pass the rendered image back to MLT. There are several benefits to doing so:

  • The plugin code no longer depends on Qt or Chromium. The logic is greatly simplified compared with the current version of WebVfx, which makes the codebase much easier to maintain and distribute.
  • Chrome provides very good (if not the best) support for the latest web standards, along with high performance (e.g. V8, hardware acceleration). It is available on most platforms.
  • Having a running Chrome alongside Shotcut is probably the ideal configuration for debugging WebVfx.

ChromeVFX Overview


As mentioned above, the goal is to connect MLT and Chrome. For every render request from MLT, we need to pass the information to Chrome, let it render, and pass the rendered image back.

Chrome DevTools Protocol and puppeteer


This is a "backdoor protocol" in Chrome. If a Chrome instance is running with remote debugging enabled, a client can control and inspect it remotely. The protocol exposes most (if not all) of the features of the built-in developer tools.

The protocol was designed for debugging, hacking, automated testing, etc. The official high-level client is called puppeteer, and it is written in Node.js.
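As a quick sketch (my own illustration, assuming a Chrome started with `--remote-debugging-port=9222`; the helper and page URL are hypothetical), attaching puppeteer to a running Chrome and grabbing a screenshot looks roughly like this:

```javascript
// Sketch: attach puppeteer to an already-running Chrome and capture one frame.
// Assumes Chrome was started with --remote-debugging-port=9222.

function debuggerUrl(host, port) {
  // puppeteer.connect() resolves the DevTools WebSocket endpoint from this HTTP URL
  return `http://${host}:${port}`;
}

async function captureFrame(pageUrl) {
  const puppeteer = require('puppeteer'); // lazy require: only needed at render time
  const browser = await puppeteer.connect({ browserURL: debuggerUrl('127.0.0.1', 9222) });
  const page = await browser.newPage();
  await page.goto(pageUrl);
  const png = await page.screenshot({ type: 'png' }); // Buffer of PNG bytes
  await browser.disconnect(); // leave the remote Chrome running
  return png;
}
```

Instead of connecting to a running instance, puppeteer can also launch its own (possibly headless) Chromium via `puppeteer.launch()`.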

Connecting MLT with puppeteer


MLT is written in C++ and puppeteer in Node.js. To connect them, I used Boost.Interprocess and wrote a wrapper as a Node.js C++ addon. Boost.Interprocess provides shared memory regions and message_queue, both of which are very easy to use.

The Chrome DevTools Protocol is based on JSON-RPC over WebSocket, so originally I had also planned to talk to Chrome directly from C++. However, after some research I realized that it would not be easy to handle all the DevTools details myself.
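For the curious, the wire format itself is simple. A command such as `Page.captureScreenshot` (the call behind puppeteer's `page.screenshot`) is just a JSON message with a client-chosen id, sent over the page's WebSocket. This is only an illustration; puppeteer builds and tracks these messages for us:

```javascript
// Illustration of a raw DevTools message. Each command carries an id;
// Chrome's reply echoes the same id so responses can be matched to requests.

let nextId = 0;
function devtoolsCommand(method, params = {}) {
  return JSON.stringify({ id: ++nextId, method, params });
}

const msg = devtoolsCommand('Page.captureScreenshot', { format: 'png' });
// Chrome would reply with something like: {"id":1,"result":{"data":"<base64 PNG>"}}
```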

In the end, this C++/Node.js channel was surprisingly easy to implement.

Important Code Snippets


Render Server

It's a Node.js script that connects to Chrome via puppeteer. It forwards render requests from MLT to Chrome and passes screenshots back the other way. The event loop looks like this:
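The original snippet isn't reproduced here, so below is a hedged reconstruction of the loop's shape. The `ipc` object and the in-page `webvfx_render` function are placeholders, not the prototype's actual names:

```javascript
// Sketch of the render server loop: block on the message queue, ask the page
// to draw the requested frame, then ship the screenshot back to MLT.

function parseRequest(buf) {
  // assume requests arrive as JSON, e.g. {"time":0.5}
  return JSON.parse(buf.toString('utf8'));
}

async function serve(page, ipc) {
  for (;;) {
    const req = parseRequest(ipc.receive());                        // blocks until MLT wants a frame
    await page.evaluate((t) => window.webvfx_render(t), req.time);  // the page draws the frame
    const png = await page.screenshot({ type: 'png' });
    ipc.send(png);                                                  // PNG bytes go back over IPC
  }
}
```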

IPC for Node.js

The ipc module mentioned above is a wrapper around Boost.Interprocess, which looks like this:
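The addon itself is C++; from the Node.js side all it needs to expose is a blocking `receive` and a `send`. A minimal in-memory stand-in (my sketch, not the prototype's actual API) shows the shape of that interface:

```javascript
// In-memory stand-in for the ipc addon. The real module backs send/receive
// with a Boost.Interprocess message_queue plus a shared memory region,
// and its receive() blocks when the queue is empty.

class FakeIpc {
  constructor() { this.queue = []; }
  send(data) { this.queue.push(Buffer.from(data)); }
  receive() { return this.queue.shift(); }
}

const ipc = new FakeIpc();
ipc.send('{"time":0.5}');
const roundTrip = ipc.receive().toString('utf8'); // '{"time":0.5}'
```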

MLT filter

Finally, this is the modified EffectsImpl::render from WebVfx, which is now greatly simplified:

Discussions

Chrome Instance

In the demo I'm using an already-running remote Chrome. puppeteer can also start a new Chrome/Chromium instance on demand. Headless Chrome/Chromium should also work.

Performance

I had always been worried about performance: there are many layers between Chrome and MLT, including the network, SSL, IPC, and especially PNG encoding and decoding. However, performance appears fine in my demo, even with a remote Chrome. Of course this can never be as fast as a native CEF integration, but in my case the bottleneck is usually the rendering itself, which involves heavy JS code and 3D rendering. So it is already worth it to move from QtWebKit to Chrome.

WebVfx Interface

In the prototype I have implemented only a minimal webvfx interface, which barely makes the demo work. Most features are not available yet:
  • passing parameters (as defined in mlt xml)
  • passing images (existing frame to be processed by the filter)
  • multiple running filters (currently the render server allows only one client)
All of them should be easy to implement, with a better defined IPC protocol.
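One possible shape for such a protocol (entirely hypothetical; the prototype just ships bare buffers) is a length-prefixed JSON header followed by a raw payload, so that parameters, input images, and client ids can all share one message queue:

```javascript
// Hypothetical framing: [4-byte header length][JSON header][raw payload].

function pack(header, payload = Buffer.alloc(0)) {
  const head = Buffer.from(JSON.stringify(header), 'utf8');
  const len = Buffer.alloc(4);
  len.writeUInt32LE(head.length, 0);
  return Buffer.concat([len, head, payload]);
}

function unpack(buf) {
  const n = buf.readUInt32LE(0);
  return {
    header: JSON.parse(buf.slice(4, 4 + n).toString('utf8')),
    payload: buf.slice(4 + n),
  };
}

// e.g. a frame-to-be-filtered plus its parameters in one message:
const framed = pack({ type: 'render', time: 0.5, client: 1 }, Buffer.from('raw pixels'));
```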

On the other hand, the WebVfx protocol relies on a global webvfx JS object, which is used to register the render function and to notify MLT that the page has been initialized.
In my demo I used some hacky code via console.log(). I think it should be easy to inject some JS code/objects via Node.js, but I'm not sure whether this can be done before the page is loaded.
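It turns out puppeteer can run code in the page before any of its own scripts, via `page.evaluateOnNewDocument`. Here is a sketch of installing a stub `webvfx` object that way; the property names below are illustrative, not the real WebVfx API:

```javascript
// Pure helper: attach a stub webvfx object to any global (testable in Node).
function makeWebvfxStub(g) {
  g.webvfx = {
    renderRequested(cb) { g.__render = cb; },    // page registers its render callback
    getReady() { console.log('webvfx-ready'); }, // hacky ready signal via console.log
  };
}

// In the browser, the same shape can be installed before any page script runs:
async function installWebvfxStub(page) {
  await page.evaluateOnNewDocument(() => {
    window.webvfx = {
      renderRequested(cb) { window.__render = cb; },
      getReady() { console.log('webvfx-ready'); },
    };
  });
}
```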

In the worst case, we may introduce a webvfx JS library that each WebVfx page should include.

Conclusion


This prototype works much better than I had expected. It demonstrates the possibility and potential of using Chrome as an MLT filter. With more effort, ChromeVFX may actually become a useful MLT plugin.

Of course a proper CEF integration may achieve the same thing with better performance, but it may or may not be worth it considering the cost of development and maintenance.


