Neither side is "right" or "wrong", because this isn't actually a binary question. The two sides are judging the level of risk involved and whether that level of risk is acceptable, and they are coming to different conclusions.

V8 is undeniably intended to be a secure sandbox. Chrome relied on V8 as the only thing isolating web sites from each other for about a decade. A couple of years ago, Chrome implemented "strict site isolation", which also forces every "site" into a separate OS process. That doesn't mean the Chrome team no longer cares about V8 itself being secure; strict site isolation is a defense-in-depth measure, and Google will pay a bug bounty if you break either layer.
However, V8, like any piece of software, has bugs. The possibility of security bugs implies some level of risk. The question is: how much risk is there, and is it acceptable? V8 is complex, and as a result it tends to have more bugs than, say, a virtual machine hypervisor. Hence the argument that V8 is riskier. Some believe that level of risk is unacceptable for a server environment.
On the other hand, V8 receives more security research and better fuzzing than any other sandboxing technology. Most bugs in V8 are actually found by Google's own fuzzers. And process isolation is only one possible form of defense in depth; Cloudflare Workers implements many others that align better with our particular requirements.[0]
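To give a flavor of what that looks like: one class of mitigation discussed in the post linked at [0] is denying guest code a high-resolution clock, since Spectre-style attacks depend on precise timing. Here is a minimal sketch of that idea in TypeScript -- not the actual Workers implementation, and the names onHostIoCompleted and guestDateNow are made up for illustration:

    // Guest-visible time is frozen between I/O events, so a tight loop
    // cannot build the fine-grained timer a speculative-execution attack needs.
    let lastObservedTime = Date.now();

    // Hypothetical hook the host runtime calls whenever observable I/O
    // completes (a fetch response arrives, a scheduled callback fires, etc.).
    function onHostIoCompleted(): void {
      lastObservedTime = Date.now();
    }

    // The Date.now() exposed to guest code returns the frozen value.
    function guestDateNow(): number {
      return lastObservedTime;
    }

Because mitigations like this live in the runtime itself, they can be layered on top of V8's own sandboxing rather than replacing it.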
My take -- as the tech lead of Cloudflare Workers -- is that people broadly overestimate the risk simply because the approach is different. In fact, I personally believe that typical cloud environments that allow you to run arbitrary native-code binaries are much riskier. The reason is that sandboxing native code doesn't just require secure virtual machine software; it also requires bug-free hardware. CPUs are extremely complex, certainly much more complex than V8. Do we really believe they have no bugs that allow VM breakouts? What happens when someone finds such a bug and it can't be mitigated without new silicon? Say it turns out some obscure instruction accidentally allows unchecked access to all physical memory -- what then? It would be quite a disaster for the industry.
In contrast, if your platform only accepts non-native code formats like JavaScript and WebAssembly, it's much easier to respond to such bugs by e.g. controlling access to the buggy instruction.
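As a toy illustration of why that control point matters -- this is a sketch of my own, not Workers' actual deploy pipeline -- a platform like this only ever accepts formats it compiles itself, so every native instruction that runs is chosen by the platform's compiler and can be changed with a software update:

    // Hypothetical deploy-time gate: only source-level formats are accepted,
    // never raw native binaries.
    function acceptUpload(bytes: Uint8Array, kind: "js" | "wasm"): void {
      if (kind === "wasm") {
        // WebAssembly.validate() checks that the bytes are well-formed Wasm
        // bytecode; it does not instantiate or run anything.
        if (!WebAssembly.validate(bytes)) {
          throw new Error("rejected: not valid WebAssembly");
        }
      } else {
        // For JavaScript, compiling (without calling) the source as a function
        // body is enough to confirm we were handed source text, not machine code.
        new Function(new TextDecoder().decode(bytes));
      }
      // Everything past this point is compiled by the platform's own engine
      // (e.g. V8), so if a hardware instruction turns out to be buggy, the
      // fix is a compiler change, not new silicon.
    }

Contrast that with a platform that accepts arbitrary native binaries: there, the tenant picks the instructions, and a CPU-level bug may be unfixable until the hardware is replaced.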
So frankly, my take is that the entire industry has already accepted a much higher level of risk; people just don't talk about it much because it's the status quo. Cloudflare Workers gets more questions because we've made a different judgment.
With that said, it's absolutely possible for smart people to disagree on all these points, and probably neither side will ever be proven definitively right or wrong.
[0] https://blog.cloudflare.com/mitigating-spectre-and-other-sec...