2019-10-24

ChromeVFX Prototype

TLDR: ChromeVFX uses Chrome as an MLT filter

Demo



In the demo, I'm editing an MLT file with a WebVfx filter in Shotcut. Shotcut is connected to a Chrome instance via ChromeVFX.

As shown in the video, every frame is directly rendered in Chrome and reflected in Shotcut. I can also modify the web page directly in Chrome (e.g. zooming in/out).

In this setup Shotcut is running in a Linux VM and Chrome is running on the Windows host. So technically this is already a remote Chrome. A local or headless Chrome should also work in theory.

Background


I've been using MLT and WebVfx for a while; together they allow me to render all sorts of things using web technologies.

WebVfx internally uses QtWebKit to render HTML/JS. QtWebKit naturally uses Qt to bridge C++ and JavaScript, and it is quite easy to pass messages and events between the two with the Qt language bindings.

However, QtWebKit is not the ideal choice. It has been officially removed from Qt 5.5, although we can still compile it from source. It uses an old version of WebKit with bugs and missing HTML5 features. @annulen has been making efforts to bring it back up to date, but that work doesn't seem ready yet. Besides, WebKit doesn't include V8.

There have been discussions about porting WebVfx to QtWebEngine or the Chromium Embedded Framework, which are Qt and C++ bindings of Chromium respectively. In theory both should work, but in practice it's not that easy. I've played with both and didn't get very far. Both frameworks provide "raw access" to Chromium, which makes them very powerful, but it also means handling things like message loops and coordination among multiple processes ourselves. I just got lost in the docs and code.

Recently I learned about the Chrome DevTools Protocol, and decided to give it a try.

Motivation


So my idea is to use Chrome as an MLT filter. Whenever MLT requests a frame, we pass all the information to Chrome, let it render, and pass the rendered image back to MLT. There are several benefits to doing so:

  • The plugin code no longer depends on Qt or Chromium. The logic is greatly simplified compared with the current version of WebVfx, which makes the codebase much easier to maintain and distribute.
  • Chrome provides very good (if not the best) support for the latest web standards, along with high performance (e.g. V8, hardware acceleration). It is available for most platforms.
  • Having Chrome running alongside Shotcut is probably the ideal setup for debugging WebVfx.

ChromeVFX Overview


As mentioned above, the goal is to connect MLT and Chrome. For every render request from MLT, we need to pass the information to Chrome, let it render, and pass the rendered image back.

Chrome DevTools Protocol and puppeteer


This is a "backdoor protocol" in Chrome. If a Chrome instance is running with remote debugging enabled, a client may control and inspect it remotely. The protocol exposes most (if not all) features of the built-in developer tools.

The protocol was designed for debugging, hacking, automated testing, and so on. The official high-level client is called puppeteer, which is written in Node.js.
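
Under the hood, each command a client sends over the protocol is a JSON-RPC message. A minimal sketch of that format (Page.navigate and Page.captureScreenshot are real protocol methods; the buildCommand helper and its id counter are just for illustration):

```javascript
// Sketch of the JSON-RPC messages used by the Chrome DevTools Protocol.
// Every command carries a unique id so the matching response (which echoes
// the id) can be routed back to the caller.
let nextId = 0;

function buildCommand(method, params) {
  return JSON.stringify({ id: ++nextId, method, params });
}

const navigate = buildCommand('Page.navigate', { url: 'http://localhost/test.html' });
const screenshot = buildCommand('Page.captureScreenshot', { format: 'png' });
```

A client like puppeteer hides this layer entirely; you only see it when talking to the WebSocket endpoint directly.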

Connecting MLT with puppeteer


MLT is written in C++ and puppeteer is written in Node.js. To connect them, I used Boost.Interprocess and wrote a wrapper as a Node.js C++ addon. Boost.Interprocess provides shared memory regions and a message_queue, which are very easy to use.

Since the Chrome DevTools Protocol is based on JSON-RPC over WebSocket, I had originally planned to talk to Chrome directly from C++. However, after some research I realized it would not be easy to handle all the DevTools details myself.

In the end this C++/Node.js channel was surprisingly easy to implement.

Important Code Snippets


Render Server

It's a Node.js script that connects to Chrome via puppeteer. It forwards render requests from MLT to Chrome and passes screenshots back the other way. The event loop looks like this:
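
A rough sketch of such a loop, with the ipc addon and the puppeteer page injected as dependencies (renderLoop, REQ_RENDER, REQ_STOP, and window.renderFrame are names invented for this sketch; the prototype's actual message format may differ):

```javascript
// Hypothetical render-server event loop. `ipc` stands in for the
// Boost.Interprocess-backed addon and `page` for a puppeteer Page;
// both are injected so the loop itself is easy to test in isolation.
const REQ_RENDER = 0;  // invented message types, not the prototype's real ones
const REQ_STOP = 1;

async function renderLoop(ipc, page) {
  for (;;) {
    const req = await ipc.receive();      // blocks on the message queue
    if (req.type !== REQ_RENDER) break;   // REQ_STOP (or anything else) ends the loop
    // Ask the page to draw the requested frame time, then grab a screenshot.
    // `window.renderFrame` is a hypothetical page-side hook.
    await page.evaluate((t) => window.renderFrame && window.renderFrame(t), req.time);
    const image = await page.screenshot({ type: 'png' });  // PNG data as a Buffer
    ipc.send({ id: req.id, image });      // hand the frame back to MLT
  }
}
```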

IPC for Node.js

The ipc module mentioned above is a wrapper around Boost.Interprocess, which looks like this:
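
The addon's real binding code is C++, so it can't be reproduced here, but from the JavaScript side its queue semantics can be sketched with an in-memory stand-in (FakeMessageQueue and its method names are invented; a real Boost.Interprocess message_queue lives in shared memory and blocks on receive):

```javascript
// In-memory stand-in for the message_queue that the real addon wraps.
// message_queue enforces a maximum message size fixed at creation time,
// which this sketch mimics.
class FakeMessageQueue {
  constructor(name, maxMsgSize) {
    this.name = name;            // a real queue is identified system-wide by name
    this.maxMsgSize = maxMsgSize;
    this.buffers = [];
  }
  send(buf) {
    if (buf.length > this.maxMsgSize) throw new Error('message too large');
    this.buffers.push(Buffer.from(buf));  // copy, as crossing process boundaries would
  }
  receive() {
    return this.buffers.shift() || null;  // a real queue would block instead of returning null
  }
}

const q = new FakeMessageQueue('webvfx_requests', 4096);
q.send(Buffer.from(JSON.stringify({ id: 1, time: 0.0 })));
```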

MLT filter

Finally, this is the modified EffectsImpl::render from WebVfx, which is now greatly simplified:

Discussions

Chrome Instance

In the demo I'm using an already-running remote Chrome. puppeteer can also start a new Chrome/Chromium on demand. Headless Chrome/Chromium should also work.

Performance

I had always been worried about performance: there are many layers between Chrome and MLT, including the network, SSL, IPC, and especially PNG encoding and decoding. However, it performs fine in my demo, even with a remote Chrome. Of course this can never be as fast as a native CEF integration, but in my case the bottleneck is usually the rendering itself, which involves heavy JS code and 3D rendering. So it is already worth it to move from QtWebKit to Chrome.

WebVfx Interface

In the prototype I have implemented only a minimal webvfx interface, just enough to make the demo work. Most features are not yet available:
  • passing parameters (as defined in mlt xml)
  • passing images (existing frame to be processed by the filter)
  • multiple running filters (currently the render server allows only one client)
All of them should be easy to implement with a better-defined IPC protocol.
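
For instance, such a protocol could frame each request as a length-prefixed JSON header followed by raw image bytes, covering parameters, input images, and a client id for multiplexing in one envelope. The sketch below is purely illustrative; every field name is invented:

```javascript
// Sketch of a richer IPC envelope: [4-byte header length][JSON header][raw image bytes].
// The header carries filter parameters and a client id; the trailing bytes
// carry the optional input frame without a JSON-encoding round trip.
function encodeRenderRequest({ clientId, time, params, inputImage }) {
  const header = { clientId, time, params, imageBytes: inputImage ? inputImage.length : 0 };
  const headerBuf = Buffer.from(JSON.stringify(header));
  const lenBuf = Buffer.alloc(4);
  lenBuf.writeUInt32LE(headerBuf.length, 0);
  return Buffer.concat([lenBuf, headerBuf, inputImage || Buffer.alloc(0)]);
}

function decodeRenderRequest(buf) {
  const headerLen = buf.readUInt32LE(0);
  const header = JSON.parse(buf.slice(4, 4 + headerLen).toString());
  const inputImage = buf.slice(4 + headerLen);
  return { ...header, inputImage };
}
```

Keeping the image out of the JSON is mainly to avoid base64 overhead on every frame.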

On the other hand, the WebVfx protocol relies on a global webvfx JS object, which is used to register the render function and to signal to MLT that the page has been initialized.
In my demo I used some hacky code via console.log(). I think it should be easy to inject some JS code/object via Node.js, but I'm not sure whether this can be done before the page is loaded.

In the worst case, we could introduce a webvfx JS library that each webvfx page has to include.
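
Such a library could be a small shim that collects the page's render callback and signals readiness. The sketch below is what it might look like; all names are invented and this is not the existing WebVfx JS interface:

```javascript
// Minimal stand-in for a `webvfx` page library. A page includes this,
// registers its render function, and calls readyRender() once set up.
const webvfx = {
  _renderCallback: null,
  _ready: false,
  onRender(cb) { this._renderCallback = cb; },
  readyRender() {
    this._ready = true;
    // In the prototype this is where a message would go back to the render
    // server, e.g. via a console.log() hook; here it only flips a flag.
  },
  render(time) {  // called by the render server for each frame
    if (this._renderCallback) this._renderCallback(time);
  },
};

// Usage on a page:
let lastTime = -1;
webvfx.onRender((time) => { lastTime = time; });
webvfx.readyRender();
webvfx.render(0.25);
```

As for the injection question above: puppeteer does provide page.evaluateOnNewDocument, which evaluates a script before any of the page's own scripts run, so the separate library might not even be necessary.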

Conclusion


This prototype works much better than I had expected. It demonstrates the feasibility and potential of using Chrome as an MLT plugin. With more effort, ChromeVFX may actually become a useful MLT plugin.

Of course a proper CEF integration may achieve the same thing with better performance, but it may or may not be worth it considering the development and maintenance costs.

Links:

2019-10-20

On Movie Ratings

Unlike games, books, or most other products, movies essentially cannot be returned. So to avoid duds, predicting whether a movie is good or bad becomes very important. By "good or bad" here I don't mean artistic merit, social impact, or production quality, but fit with a single viewer's taste. For example, if I don't like action movies, then no matter how well an action movie is made, I won't enjoy it.

I don't know if it's because the barrier to entry has dropped, but it feels like there are far too many movies every year now; unfortunately, what stays constant is the number of good ones, not the proportion. The game industry seems to show a similar pattern.

My attitude toward movies (including TV series) is to skip new releases. If, months or years after a release, people still remember it and can still bring it up online ("isn't this the classic scene from movie XX?"), only then do I consider it probably worthwhile and go research it further. Before and during a movie's theatrical run there isn't much material to judge by; the trailer is probably the main source of information, but I think a trailer can only roughly establish the genre and shouldn't be relied on for much else. One time I got burned: I watched a trailer of about 10 minutes, thought it looked good, went to see the action movie, and found that all of its best action scenes were in the trailer. Did the trailer lie? No. Was I fooled? Absolutely.

I also often read movie synopses, even though they are mostly snarky recaps. Many movies compressed into a little story of under 15 minutes actually turn out to be quite entertaining. Very few movies survive this: ones where I learned the plot and the ending, still went to watch, and loved it anyway, such as 《カメラを止めるな!》. Most movies, once filtered through a synopsis, simply lose my interest.

Ratings are another roughly effective filter, for example Douban, which is quite influential in China. "Douban rating X.Y" is probably the most effective and most concise movie review outside of Douban itself. Still, I've always felt ratings have limited reference value, on the grounds that Douban's scores come from "people willing to rate movies on Douban", not from everyone (say, people randomly surveyed on the street). Whether someone is "willing to rate on a website" depends largely on personality and on how they felt about the film. And I have never been that kind of person.

But after a discussion with a friend last year, I figured I could run a quantitative experiment to judge how useful each rating system actually is to me. In short: I watch a number of movies, rate them myself, then compute the correlation with each rating system. My own scale has four levels: good, okay, barely watchable, and bad, with scores of 2, 1, 0, and -1 respectively. In addition, before watching each movie I predict my score based on information found online, as a baseline for comparison.

Below are the statistics for seventeen movies, nine foreign and eight domestic. The chart shows each rating system's Pearson correlation coefficient with my actual impressions; the higher the value, the stronger the correlation:



As you can see, none of the correlations are impressive. The highest is my own prediction, and Douban is noticeably higher than the other systems. The most interesting part is Metacritic, whose correlation is almost zero, or even negative.

Before concluding that "my own predictions are more reliable than any rating system", I gave it some more thought:

- My own scores were informed by all the information I found online, which already includes the various ratings and reviews.
- Most of the seventeen movies I picked were ones I predicted would be decent: only one predicted -1, one predicted 0, and the rest 1 or 2. So this is not a uniform sample; in effect, the online ratings had already filtered out most of the bad movies for me. Removing the two movies predicted -1 and 0, the Pearson correlations look like this:


Although the values are still low, many systems now correlate better than my own predictions, and even the negatively correlated ones could still be put to use.

So the only conclusion I can draw is: in the high-score region (i.e. for movies my initial judgment says are watchable), online ratings are marginally useful, but not very. In theory, low online scores could help me filter out bad movies, but my experiment cannot confirm that.

I suspect the same applies to games; maybe I'll run a similar experiment to verify that later.