Jay Taylor's notes

Run Unix processes inside your browser | Hacker News

Original source (news.ycombinator.com)
Tags: javascript linux browser unix news.ycombinator.com
Clipped on: 2017-04-24

Reminds me of this talk by Gary Bernhardt:


I don't want his prediction to happen, unless we have WebAssembly.

I've not watched that in a while! Enjoyed it again, so thanks!

Have we not almost arrived at this dystopian javascript hella-future though?

I mean, with unikernels which run node in ring 0:


And that we can run an emulated linux in browser:


I'm doing my best to ignore it all....

There it is, shoot yourself. https://i.imgur.com/pC6EV0v.png

Looks like yet another example of the https://en.wikipedia.org/wiki/Inner-platform_effect ...when will this abstraction madness end?

Extended JavaScript runtimes for C, C++, Go, and Node.js that support running programs written in these languages as processes in the browser.

Does that mean it's an interpreter? The main page doesn't say much.

Hi - main author here.

Browsix is not an interpreter. Browsix provides a shared kernel and primitives like system calls to _existing_ interpreters and runtimes that target JavaScript.

For example, we extended both the GopherJS and Emscripten compilers and runtimes to talk to our shared kernel, so that processes written in C and Go can run in parallel (on separate Web Workers) and communicate over pipes, sockets and the filesystem in the browser (much like they can in a standard Unix environment).

Thanks, this is super exciting. It really could push LaTeX on the web. In my opinion, this solves a problem I feel is similar to the one every attempt to implement Vim keybindings has: which features of the original to implement (everyone wants something different), and slight bugs or limitations which aren't apparent at first.

Wouldn't it be easier to emulate x86 than to try to reimplement the entire Linux syscall table in JavaScript?

Then you can run anything that works on ia32 unmodified -- gcc, Linux, X, Firefox... Whatever...

Someone already did it: http://bellard.org/jslinux/tech.html

JSLinux is a tremendous feat of engineering.

If you want to run realistic programs in the browser (and not just technology demos), WebAssembly, asm.js, and compilation to JavaScript in general is the way to go. WebAssembly can easily execute within a factor of 2 of optimized GCC binaries, whereas JSLinux is ~80x slower.

Paired with compilers like Emscripten and GopherJS Browsix has the potential to be a relatively fast and lightweight solution to running legacy code in the browser in a way that integrates with existing tools for web UIs.

Do you have a real world use case for this sort of thing? I'm digging the ability to do these kinds of satanic rites, but I'm not really sure that there's a practical use today other than pushing the bounds of what's possible..

I think the most compelling use case is running legacy code in the browser (and in applications like Atom and Visual Studio Code) - like our Latex example.

Another good use case is command line tools like graphviz. Someone wrote a wrapper around an emscriptenized Graphviz that looks great, but Browsix should lower the bar for using great existing tools like these from JavaScript.

Sounds like a lot of unnecessary overhead, running an extra kernel and an extra layer of interpreter, with code/semantics that don't match well to the lower (JS) interpreter. Sure, it's the best (=only) way to run existing binaries, but why not make use of the fact that source code is available for many applications?

Because you need to write an entire kernel and I guess a compiler, and probably modify the applications to work on it instead of just dropping an ubuntu iso that works on any other system into it and using it unmodified?

I'm not sure the overhead is really that significant... I mean, we have pretty fast computers these days, and I'm not sure even a 30% perf increase is worth all the effort of writing your own kernel and compiler when we're talking about doing something like running native binaries inside a browser which is pretending to be a real kernel...

It's awesome we can do these things today, and I guess that's reason enough to be doing them, but I'm not sure why we wouldn't want to just run any x86 (windows, linux, bsd, whatever) and not have to play catchup all the time..

Can we use asm.js to make qemu work? :}

Thinking about it a bit more...

The Linux ABI has an implementation on SmartOS, BSD and even Windows now... This is great: we have a set of syscalls that can be hit on almost any of the major OSs -- it might not be the greatest/fastest thing ever to translate Linux syscalls into whatever is native, but the bigger picture here is that Linux somehow ended up accidentally being what things like CloudABI were pushing for.

If the linux syscall table becomes the target, and those apps work everywhere, soon enough we won't need to mess around making things that run on bare metal support a lot of different os's with tons of IFDEF's and so on.. We just target the one ABI and leave the translation to the OS..

If this is an attempt to implement the linux syscall table in browser, then that means we could run these binaries in even more places!

It took, I imagine, quite a lot of folks quite a lot of time at MS and Joyent to do so, though, and reading through this I thought it wasn't binary compatible with Linux?

tl;dr -- if it's not binary compatible with Linux, then emulation is probably more useful. If it is compatible with Linux, we're getting really close to having a standard binary format that works _EVERYWHERE_, and that's pretty fucking cool :}

shrug I'm not really that far thinking most times...

Very interesting, can you tell us more how GopherJS is being used by Browsix?

Yes! GopherJS has been amazing to work with.

The majority of our changes (https://github.com/bpowers/browsix-gopherjs) amount to an alternative implementation of the compiler/natives/syscall package (we started before GopherJS had its own filesystem implementation). This lets us use the stack saving + resuming support GopherJS already has to implement blocking system calls.

If you start the seed of an idea at least three emulators deep when the browser returns to the top level it thinks it originated that idea.

They skipped limbo, where time has no meaning. There you can create a world with only a state machine and infinite tape.

But my browser wouldn't let go of the idea and then it cr....

If I understand right it compiles said languages to browser-compatible JS and provides an environment for them, so no, I don't think it includes an interpreter in itself. Everything is JS running in the browser's JS interpreter.


To quote Bobby's comment above:

Browsix is not an interpreter. Browsix provides a shared kernel and primitives like system calls to _existing_ interpreters and runtimes that target JavaScript. For example, we extended both the GopherJS and Emscripten compilers and runtimes to talk to our shared kernel, so that processes written in C and Go can run in parallel (on separate Web Workers) and communicate over pipes, sockets and the filesystem in the browser (much like they can in a standard Unix environment).

I don't think this is the same concept.

Running Unix processes inside a browser actually is new functionality. You can now also run Unix processes inside a browser on a Windows host, for example.

You can run Unix processes on Windows 7 and the others with MSYS or Cygwin or on Windows 10 using the Microsoft-provided Linux compatibility layer.

Which presents itself as a much higher barrier. Only the smallest fraction of Windows PCs have these layers enabled/correctly configured, for it to run.

Virtually all Windows PCs have a browser installed and running. Most users know how to open and operate a browser.

Of course it's kind of redundant for everything to be reimplemented in the browser -- things that should be implemented and used at the OS level -- but I think it's a testament to the shortcomings of the UX design, or even more fundamental paradigms, of personal computing.

>Only the smallest fraction of Windows PCs have these layers enabled/correctly configured, for it to run.

I have never had any trouble running either MSYS2 or Cygwin on Windows computers even when the amount of privileges given to my account has been very low.

The website for MSYS2 was blocked by my high school's firewall. I think Cygwin used SourceForge, which was arbitrarily filtered for badware risk. There were other ways of fixing that, but my point is... it's good to have alternatives, and the browser ensures good portability.

Mixing cygwin and WSL in one sentence is generally a bad idea. Cygwin even says right on their website that it is not a way to run Linux or Unix programs on Windows; cygwin and WSL serve completely opposite purposes.

WSL (and flinux) enable unmodified linux binaries to be run on Windows. WSL handles this at the kernel level, while flinux uses a higher level BCT.

cygwin has another purpose: It requires everything to be built from source, linking against one or more native Windows libraries (most notably cygwin1) that emulate POSIX. In short: cygwin's goal isn't to run unix programs whatsoever, it just provides Windows applications with an interface to use emulated POSIX calls - which is awesome, too.

Since WSL supports binaries at the kernel level, performance is much greater for features where cygwin would have to emulate a completely different paradigm (like fork).

So while cygwin is pretty useful for a quick and usually dirty porting of unix programs to Windows, and is even frequently used to emulate(!) a unix environment, WSL can run any linux environment of your choice (the standard being ubuntu) natively.

If you think about it, this is no more abstracted than any Java program.

You compile the source (C or Java) to an intermediate representation (JS or bytecode), which is then shipped to the client and executed by a language VM.

Very true but wouldn't you say Java bytecode looks a lot more like it was intended to be interpreted as an intermediate language than Javascript does?

Hence the transition from asm.js to WebAssembly, which is getting pretty close to readiness.

Except, it’s also less debuggable and modifiable by end users.

How is the end user supposed to modify the code, or break the DRM to create a local copy, or port it to another platform once this isn’t supported anymore?

WebAssembly breaks that ability, while Java at least retains it somewhat.

WebAssembly is a compiler target, not a type of source code; you compile your source to WebAssembly, you don't write code in it.

On the other hand, the browser devtools all still work on pages that use it. If the owner of the site you're debugging is friendly they'll include source maps so that you see the original source. If they're not then you get a disassembled view of the webassembly code. This is not very different from seeing a prettified version of a minified or obscured Javascript file.

I don't think that it changes the equation that much, except that execution speed will be better.

> If they're not then you get a disassembled view of the webassembly code. This is not very different from seeing a prettified version of a minified or obscured Javascript file.

I’ve spent a few weeks looking at those, comparing them, and manually deobfuscating examples.

Even getting through DRM in most web- or android apps is easier than trying to get through similar example projects in webassembly.

> If the owner of the site you're debugging is friendly they'll include source maps so that you see the original source. If they're not then you get a disassembled view of the webassembly code.

The whole point is that this shouldn’t just be "if the owner is friendly", but a technical or legal requirement.

You would legislate that website operators are not allowed to obfuscate their javascript by removing all the variable names as well? That would either be a dystopian nightmare or utterly useless.

Why not? Why would it be a dystopian nightmare?

I mean, we already have early systems where you pay a tax if you create copies of stuff, which is distributed to creators, and in return you may just rip CDs, etc.

Why would a truly open source world be a dystopian nightmare? Creators still would get paid, innovation would be a lot faster, without sacrificing anything. And it would find a legal solution to what is already done illegally with the remixing culture, and also allow it in software.

I think db48x's point is that you can't force open source, since people can still obfuscate it enough to render the openness moot, unless you literally legislate and enforce a full style guide on every single programmer (hence the dystopia).

Well, in common law, yes. In a civil law system, you just write something like "the preferred representation for development has to be publicly accessible", and you’re done.

In this scenario WebAssembly is analogous to bytecode, which is also less debuggable and modifiable than the Java source

What specifically are you referring to? WebAssembly and JVM bytecode are fairly similar; the JVM is more explicit about some things, like data types (classes), but I'm not sure the difference is that stark.

Have you tried reversing code in both before? Even the most complicated bytecode for the JVM (for example, code extracted from Android apps and thrown through dex2jar) is a lot easier readable than WebAssembly.

I’ve spent quite a while reversing example projects in both formats, and I really hate WebAssembly.


As much as my initial reaction was along the lines of "oh god", actually I think this could be really useful for exposing legacy (or just unix-tied) tools to less savvy users.

Pragmatic situations basically.

This is how we improve the solutions of previous generations. Everybody knows the recipe for the future - since web = future and web is written in JavaScript, we need to rewrite everything in JS.


Also related to "abstraction inversion".

Interpreters, interpreters all the way down...

This is still really rough. I see lots of "why?" questions in the comments here, so I'll tell you what I immediately thought of -- getting close to a container system on an iPad Pro would be really amazing. For a while I traveled with just an ipad pro for work, and one of the largest pain points was the lack of any real OS for work without ssh-ing somewhere.

That said, my vibe is that by the time this matures enough (and Safari on iOS works well enough with it) that it's performant, there will likely be another solution that's better.

One solution for this problem that is very mature and stable today is not using an iPad Pro for work on the go :)

Seriously though, as a developer I find the Surface Pro far better suited for mobile productivity, I can't imagine getting much done on an iOS device while traveling besides answering email.

Their github may be more informative.


Seems they are using web workers to implement a POSIX environment.

And their shell demo does not even support cd(!).

It doesn't support a lot of very basic things, like variables

    $ TEST=1
    $ echo $TEST
Interestingly, in the example they show:

    $ cat README | while read L; do echo "README: $L"; done
...and yet:

    $ read L
    /usr/bin/read: command not found
    $ while true; do break; done
    /usr/bin/while: command not found
    $ echo "quotes"
Seems like a horribly backwards approach to implementing a posix shell...

This is apparently the source code for it:


It doesn't do much, and certainly does not qualify as being unmodified existing code (which is what this whole system is supposed to be able to run as one of its benefits.)

That was our first shell we used for debugging before we had emscripten working.

The shell currently used in the demo is dash, compiled with emscripten: https://github.com/plasma-umass/browsix/blob/master/src/dash...

Which ends up in the build through our gulpfile: https://github.com/plasma-umass/browsix/blob/master/gulpfile...

Surely one can just compile bash for browsix though?

That example is actually running dash - the Debian Almquist shell.

The weirdness (and reason that cd + setting variables doesn't work) is because whenever you type a command in, it executes:

$ dash -c '$COMMAND'

Rather than having a long-running shell process listening on standard input. We plan to fix this and implement a full TTY subsystem in the next month or so.
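This statelessness is easy to reproduce in any POSIX shell (using sh here; the demo uses dash, but the -c mechanics are identical): each invocation is a fresh process, so directory changes and variable assignments vanish with it.

```shell
# Each command in the demo runs as a separate "dash -c" invocation,
# so shell state dies with the process:
sh -c 'cd /tmp && pwd'    # prints /tmp -- but only inside that process
sh -c 'pwd'               # a fresh process: back in the original directory
sh -c 'TEST=1'            # the variable is set...
sh -c 'echo "TEST=$TEST"' # ...and already gone: prints TEST=
```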

Probably not. If you read the source, this thing has no real idea of processes.

It seems they only have these binaries available on their shell demo:

    $ ls /usr/bin
    cat cp curl echo exec grep head ld ls mkdir nice node rm rmdir sh sha1sum sort stat tail tee touch wc xargs

And their shell demo does not even support cd(!).

That means their claim that "Unmodified C, C++, Go, and Node.js programs run as processes on Web Workers"

...is not true, as otherwise they could've just "compiled" e.g. bash or some other shell to run on their system, along with GNU coreutils or similar.

The shell is dash, compiled from C. We run each command as

dash -c "$COMMAND"

Processes can change their current directory (We support the chdir(2) system call), but the way the shell demo is currently implemented no state is retained between command invocations. We plan to address this shortcoming soon as part of a bigger TTY refactoring.

My guess is, they can, it's just horrendously slow.

It would be seriously cool to compile Chromium and run it inside Firefox, or the other way around :)

You're in luck: you can run Windows 95 in your browser, and then launch IE in that! https://win95.ajf.me/win95.html

Interesting link, but I got a "nested emulation timeout" exception when I tried to run that.


Ad infinitum. Or nauseam.

Just once is sufficient. The reason why you would want to do that is compatibility: now you only have to support webkit OR gecko/servo, not both :)

Are we reaching the limitations of Atwood's Law[1]?

[1]: https://blog.codinghorror.com/the-principle-of-least-power/

The latex demo has blown my mind! How did they do it? I can understand running some shell in the browser, but the whole latex distro? Amazing...

Disclaimer: not being sarcastic at all.

Can someone explain what can be the use case for this?

The use case is right there in the text:

> enabling unmodified programs expecting a Unix-like environment to run directly in the browser

The latex example is probably the best example. There are a handful of latex typesetter web pages, but the "quick and dirty" implementation for all of them are something like:

(1) take the input (2) shell out to run the latex app on a unix box (3) return the output to the user via the browser

Step (2) requires a complete unix environment. As usage scales, so too does the processing power required. Imagine if that app got 10k requests/second: suddenly you have a farm of "rendering servers" and middleware managing availability, request routing, etc.

This allows you to offload step (2) to the user's browser for a broader range of applications, with easier/faster/less rewriting. While the upfront cost is still more complex if you have to port the app to this js platform, there may be long term savings in simplicity/efficiency from the reduced server requirements. e.g., you could reduce the latex typesetting webapp to static files served from S3.

This is exactly what I was thinking, the most advanced LaTeX typesetter is probably mathjax, but it can't and probably won't [1] handle full LaTeX. Just consider this pretty simple LaTeX,

    x &= 2^2
    \intertext{so we can simplify}
      &= 4
With a little bit of fanciness (sourcing the .sty sheets) behind the scenes, this could faithfully reproduce LaTeX.

[1] https://github.com/mathjax/MathJax/issues/736

Edit: formatting

I could see the example of LaTeX in the browser being actually useful, when for example you're on a computer where you don't have permissions to install software.

I expect it's much the same use case as for asm.js and WebAssembly — just a different solution to the same problem. The intention is probably for it to become an easier way to port software to the web, although it seems like a lot more work will have to be put into it for it to be remotely viable for many programs.

Speaking of asm.js and WebAssembly... they would be good tools to implement a Unix environment.

We already work with asm.js, and I've gotten some WebAssembly prototypes running under Browsix as well. Browsix is nicely orthogonal to those projects -- WebAssembly lets you compile existing C/C++/Rust code to run in the browser with almost no overhead, and Browsix gives you the environment your program expects (pipes, sockets, a filesystem, subprocesses).

No, I cannot. I'm mystified why anyone would spend their time building this...

Because it's a good demonstration of how the Web is advancing by showing that this kind of thing is possible.

It's also a good project for learning about both the web and POSIX. Not that this kind of project is required when applying to a job, but it would seriously catch my attention when reviewing a candidate for any web or systems job. It's also top of Hacker News, so it's an excellent way to get your name out there.

I wouldn't call this an advance. The web does not interact well with your host system; it creates its own silo, and this is just another step in the silo direction. Why talk to the host at all? Embrace the web! Entrust all your files and software to us!

You're emulating a filesystem inside the browser to manage files, while you have highly advanced filesystems sitting outside the browser with direct access to NVMe SSDs, much lower power consumption and all that. But nope, "do it in the browser" nullifies all the advances that native makes.

> Because it's a good demonstration of how the Web is advancing by showing that this kind of thing is possible.

This being possible is a glaring deficiency of the Web, and an example of how it is stalling the state of the art.

Would more possibilities slow down state of the art or speed it up?

There are many possible things you can do with a frying pan, but only a few of them are good ideas. The flexibility of the Web is a generally good thing, but often misused and often for inferior reinventions of the wheel like this. The amount of research time spent making Web browsers into feeble operating systems would have been better spent elsewhere.

It's the JavaScript platform that's advancing, leaving the web behind. A browsix process might happen to use a web page to bootstrap but it can't be part of the web because it doesn't have a URL.

If this worked properly, I'd find it incredibly useful. It'd give me a Posix command-line environment inside the browser, with all the commands running locally, backed by a file system on cloud storage somewhere, which would work anywhere, regardless of what computer I was currently working on.

I've been wanting something like this for ages (and have had a few ideas along those lines myself)...

The description of the technology glosses over all the really difficult bits, however, like how they turn compiled C into runnable Javascript. Emscripten? An interpreter? Gods help them, they're not using Clue, are they? How much overhead is there in executing the compiled code in chunks so that they can simulate blocking RPCs (for system calls)? Do they have a plan for threads? (I don't believe web workers support any kind of shared-memory threading.)


I found a demo here: https://unix.bpowers.net/

Inspecting the code shows lots of Emscripten stuff.

Don't you think the in-browser latex editor (with no backend server required) would be useful for at least some people?

Academic usage is all I can think of. May as well aim for QEMU in JS, then this can all be moot.

edit: found this http://bellard.org/jslinux/tech.html and this http://bellard.org/jslinux/

We can already do this I think: http://bellard.org/jslinux/

Very different approach: emulating a full x86 machine and booting an OS on it, vs recompiling applications to run in the browser and providing kernel APIs. The latter should be a lot faster and integrate better with other web tech, but of course it can't run arbitrary application binaries.

Does this mean we finally can ditch JavaScript and use bash when we code our web apps? (Only joking a little bit, actually curious about the answer.)

Not if you want to manipulate the DOM or use many of the browser APIs, AFAICT.

Ah, but why not expose the DOM on /sysfs and just hack away using sed and awk. What could possibly go wrong.

Hmm, when I first opened this I thought they had implemented a POSIX API on top of JS, but it does not look like they actually did this. Actually, I think it could be really cool if someone did. Not sure what it would take: maybe a libc and kernel, then recompile the GNU tools, plus a character driver for the terminal and local storage for persistence. I think it could be a neat alternative to spinning up a VM to test something, though obviously not useful for running anything real. If it was fast enough, though... I could see using it as a dev environment if you were stuck on Windows or something. Or just for online coding tutorials, or a way to teach kids "real" computer OSes if they only have tablets or something.

Doppio (https://github.com/plasma-umass/doppio) is exactly this: a POSIX layer (for a single process), while Browsix is a Unix layer. Both are JS only. Doppio implements a ton of libc functionality, enough to host a full JVM interpreter (see http://plasma-umass.github.io/doppio-demo/). Doppio contains BrowserFS (available separately as https://github.com/jvilk/BrowserFS), an in-browser file system that abstracts over local storage and lets you mount remote filesystems like Dropbox, Zip files, and more. Archive.org is using it for all of its DOS and Windows games.

    $ curl google.com
    Error while executing undefined: Uncaught TypeError: Cannot read property 'split' of undefined

It can run C++? Cool. Can you run a web browser instance in it? Then you'd have a web browser, running inside an operating system, running on top of a web browser, running on top of an operating system. Potentially, this operating system is a virtual machine, inside another operating system. But wait! There's more! This operating system is running inside of a browser. It's JavaScript all the way down! That's right, we're running inside of an advanced computer simulation written in JavaScript. Now about that XDS (cross-dimensional scripting) attack..

Can I use this to run Scala Native apps in the browser? Making Scala.js redundant. :P

Perhaps, but you wouldn't actually want to run a Scala program that way.

To implement C semantics, Emscripten uses a single, large JavaScript typed array as the heap. To the JS garbage collector, this heap is a big, opaque blob. Then your Scala Native program has its own garbage collector operating within its heap. The two garbage collectors are unaware of each other. This is just a vague intuition on my part, but it seems to me that an arrangement involving nested garbage-collected heaps is bound to be suboptimal. Scala.js is much better in this regard, since Scala objects are just JS objects, sharing a garbage-collected, compactable heap with the other JS objects.
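The two-heaps picture can be made concrete. This is not Browsix or Emscripten code, just a toy illustration of the idea: the C-style heap is one typed array that the JS garbage collector sees as a single opaque allocation, with "pointers" as plain integer offsets into it.

```javascript
// Toy model of an Emscripten-style heap: one big typed array, opaque
// to the JS GC. Anything the compiled program "allocates" lives inside
// it, invisible to the outer garbage collector.
const HEAP32 = new Int32Array(1024);

// A trivial bump allocator handing out integer offsets as "pointers".
let brk = 0;
function malloc(words) {
  const ptr = brk;
  brk += words;
  return ptr;
}

const p = malloc(2);   // "allocate" two 32-bit cells
HEAP32[p] = 7;         // *p = 7
HEAP32[p + 1] = 9;     // *(p + 1) = 9
```

A language runtime compiled into this heap (like Scala Native's collector) would manage objects inside HEAP32 while the browser's GC manages everything outside it; neither sees the other's liveness information, which is the mismatch described above.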

Asm.js optimized browsers should make the unneeded JS GC have no overhead.

But you are correct that it's a bad idea. Like running Scala.js on Rhino/Nashorn

Scala.js is not only about making your Scala app run in the browser. It's also (and mainly) about letting you develop in Scala for the Web environment, which includes manipulating the DOM and calling any existing JavaScript library. Scala Native run with Browsix (or compiled to asm.js/WebAssembly for that matter) would not give you that.

So no, it would not make Scala.js redundant.

I wonder if it can run emacs...

Emacs must always be the first target, right? :)

The real use case is running the browser in emacs, then that browser can run emacs with evil. That's a super quick way to get vim commands in emacs without any configuration issues.

Can a more experienced/knowledgeable HN user than myself comment on any security implications that arise as a result of this?

I'd assume that it wouldn't be any worse than code written in js. My best guess would be that they're implementing processor instructions in HTML5, then doing a compiler from there.

Which x86 instruction is <MARQUEE>?



Now start a JVM on it.

I know you were kidding, but one of the co-authors of Browsix wrote the Doppio JVM: http://doppiojvm.org/ a full JVM in pure JavaScript. It will soon run under Browsix!

and be sure to run a browser inside the JVM to check webmail.

Throw on Emacs and it's a system-in-a-system, and you'll have an infinite mirror of horribleness.

Just goes to show the truth of the old adage: those that know Unix are condemned to reimplement it.

Finally I can put a real vim instance on all my textarea fields. :)

Mmmh, I don't need a browser to run these applications though.

One of the things I always wanted to do, but never had the time to.

Is this like shellinabox or ajaxterm -- terminal emulators? If not, could you let us know the differences?

So, how does it actually work? The dedicated section on the page is not very informative.

We use Web Workers as the basis for our processes, and processes perform system calls to the kernel (running on the main browser thread). Our standard system call mechanism is asynchronous and uses the standard postMessage API to/from Web Workers. When executing a system call, the language runtime saves the current call stack, and when the Web Worker gets a response to the system call, it resumes that thread.

So we build these low-level primitives (kernel, system calls, processes) on top of APIs the browser provides, and extend to-JS compilers + runtimes to use these primitives.
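The message flow described above can be sketched in plain JavaScript. This is a simulation with invented names, not Browsix's actual API: two plain callbacks stand in for the postMessage channel between the kernel (main thread) and a process (Web Worker), and the pending-map shows where a runtime would park and later resume a blocked call stack.

```javascript
// Simulation of an async syscall mechanism over a message channel.
// In the real system the kernel runs on the main thread and each
// process in a Web Worker, connected by postMessage.

// "Kernel" side: receive a syscall message, dispatch it, reply async.
function makeKernel(replyTo) {
  const syscalls = {
    // Fake implementations, just to make the flow observable:
    getpid: (args, cb) => cb(null, 42),
    open: ([path], cb) => cb(null, { fd: 3, path }),
  };
  return (msg) =>
    syscalls[msg.syscall](msg.args, (err, result) =>
      replyTo({ id: msg.id, err, result }));
}

// "Process" side: issue a syscall, park the continuation keyed by id,
// resume it when the matching reply arrives. In Browsix the runtime
// saves the whole call stack here instead of a single callback.
function makeProcess(sendToKernel) {
  let nextId = 0;
  const pending = new Map();
  return {
    syscall(name, args, done) {
      const id = nextId++;
      pending.set(id, done);        // "save the stack"
      sendToKernel({ id, syscall: name, args });
    },
    onReply(msg) {
      const done = pending.get(msg.id);
      pending.delete(msg.id);
      done(msg.err, msg.result);    // "resume the thread"
    },
  };
}

// Wire the halves together (postMessage in the real system):
let proc;
const toKernel = makeKernel((msg) => proc.onReply(msg));
proc = makeProcess(toKernel);
```

With this in-memory channel replies arrive synchronously; over postMessage they would arrive on a later turn of the event loop, which is why the runtime has to be able to suspend and resume mid-call.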

Various languages can be compiled to JS, e.g. C/C++ using emscripten. This project implemented POSIX/Linux APIs in JS using browser features, so now you can compile applications that depend on those APIs to JS and run them in the browser. At least that's what I got from skimming the paper.

That still doesn't explain how syscalls are exposed as webworkers without a modified browser.

They aren't real syscalls. From the looks of it this is essentially a mock POSIX-ish "os" for containing webassembly-compiled applications.

What will be interesting is to see if anyone ever gets to implementing X11 emulation.

They don't expose syscalls, they implemented replacements for them. There is no access to anything outside the browser environment. Web Workers simulate processes, file system calls access a virtual file system etc.

The technical paper is online: https://arxiv.org/abs/1611.07862

so, enter an invalid command, then press and hold backspace.

kind of hilarious. wonder why that is..?

I'm not sure of the cause, but it's a known issue.

    $ cat README
    Welcome to Browsix!

    For more info, please check out:
    https://github.com/plasma-umass/browsix

    Known issues with this shell:
     - 'cd' is not implemented.
     - backspacing past '$' produces "interesting" results
    $

Funny to see this. I once considered x86 emulation plus emulation of a number of Linux system calls in JS as a way to get Go programs running in the browser. In contrast to the submission, this would have allowed unmodified programs to be run, even ones written in a language other than Go.

There's also RISC-V's ISA emulator which lets you boot a GNU/Linux install in your browser.

Can someone knowledgeable about NaCl comment?

    $ curl --help
    Error while executing undefined: Uncaught TypeError: Cannot read property 'split' of undefined



