That was a sad thing to read; the author is so clueless they don't even know that the reasons they imagine something might have been broken are wrong.

Back when UNIX was born the <tab> character was a first-class citizen on every computer on the planet, and many languages used it as part of their syntax. Static binaries were invented when Sun and Berkeley co-developed shared libraries and there needed to be binaries that you knew would work before shared libraries were available (during boot before things were all mounted, during recovery, etc).

It always amazes me when someone looks at computer systems of the 70's through the lens of "today's" technology and then projects a failure of imagination onto those engineers back in the 70's. I pointed out to such a person that the font file for Courier (60 - 75K depending) was more than the entire system memory (32KW or 64KB) that you could boot 2.1BSD in. Such silliness.
|
|
| |
The most important thing about UNIX - no matter how simplistic (or not) it might appear or how significant (or not) the perceived flaws might seem - is that a move to UNIX back in the 70s-80s was liberating in its simplicity and human friendliness for so many of those coming from the world of closed-off, proprietary operating systems, walled gardens, development tools and kernel APIs.

Writing a mere string out to a file on a non-UNIX was nowhere near as easy as 'fd = open ("a file", O_WRONLY); write (fd, p_aString, strlen (p_aString)); close (fd);' on UNIX. Many systems required either a block-oriented or a fixed-record file (with the record structure defined first) to be opened, the block or the record to be written out, and then the file to be closed. Your record-oriented file has grown very large? Brace yourself for a coffee break after you invoke the "file close" system call on it. Did your process get killed off or just die mid-way through? Well, your file might have been left open and would have to be forcefully closed by your system administrator, if you could find one. Your string was shorter than the block size, and now you want to append another string? Read the entire block in, locate the end of the string, append the new one and write the entire block back. Wash, rinse and repeat. Oh, and quite a few systems out there wouldn't allow one to open a file for reading and writing simultaneously.

Flawed make? Try to compile a 100-file project using JCL or DEC's IND with a few lines of compilation instructions. Good luck if you want expandable variables; chances are there wouldn't be any supported. You want to re-compile a kernel? Forget about it; you have to "generate" it from the vendor-supplied object files after answering 500+ configuration-related questions and then waiting for a day or two for a new "system image" (no, there were no kernels back then outside UNIX) to get linked. Awkward UNIX shell? Compared to the crimes a number of vendors out there committed, even JCL was the pinnacle of "CLI" design.

No matter how perfect or imperfect some things were in UNIX back then, hordes of people ended up running away screaming from their proprietary systems to flock to UNIX, because suddenly they could exchange source code with friends and colleagues who could compile and run it within minutes, even if some changes were required. Oh, and they could also exchange and run shell scripts someone else wrote, etc. In the meantime, life on other planets was difficult.
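For anyone who never had to suffer the other side: here is that UNIX snippet fleshed out into a complete program. A minimal sketch - the filename, the string, and the O_CREAT/O_TRUNC/mode bits are my additions so it compiles and runs standalone:

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char *p_aString = "just a string\n";
        /* A file is only a byte stream: no record format to declare up front. */
        int fd = open("a_file", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return 1;
        write(fd, p_aString, strlen(p_aString)); /* write the bytes, nothing more */
        close(fd);                               /* returns immediately, no coffee break */
        return 0;
    }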
|
|
| |
I had much the same reaction. I learned Unix on a PDP-11 running sixth edition AT&T. I was also using the University's UNIVAC 1108, and the elegance, simplicity and orthogonality of the Shell vs the EXEC-8 command language were a joy to behold. Sure, there is much about Unix that could be better, and I sometimes regret that Plan 9 didn't succeed more. But it's like Kirk said to Saavik, "You have to learn WHY things work on a starship". Unix mostly does things for a reason, even if you don't see it immediately.

But, having said that, I really hate how TAB is used in Makefiles. The first screen editor I used was the Rand Editor, and it treated TAB as a cursor motion character. It was a handy way to move horizontally on a line by a fixed amount, something not easy to do in, say, vi. This editor converted all tabs to spaces on input and then re-inserted them on output, which mostly worked, but it could mess up some files that were fussy about tabs.
|
|
| |
I think it is this ambiguity of tab that was the source of confusion. Certainly on DEC machines at the time, when you typed 'tab' in an editor that was inserting text, it was because you wanted an ASCII TAB character in your text. What's more, the program could just echo the TAB character to the terminal you were using and the terminal would "do the correct thing" for displaying that tab. Back in the 90's when 'model/view' controllers were the rage, this was couched as separating presentation from semantics.

The challenge came when there were devices which operated incorrectly when presented with a TAB character, or confused command data with actual data. That became the "killer" issue when computers started substituting for typewriters. A typist knows that if you hit the tab key the carriage slides over to the next tab stop set on the platen bar, but the paper is "all spaces". When you try to emulate that behavior in a computer, the TAB becomes a semantic input into the editor - "move to the next tab stop" rather than "stick a tab in" - and "move to the next tab stop" could be implemented by inserting a variable number of spaces. Computer engineers knew it was "stupid" to try and guess what you wanted with respect to tabs, so they wrote a program for formatting text called 'troff' (which had similarities to the program on RSX11, TENEX, and TOPS called RUNOFF).

It is always interesting to look back though. If you had told a system designer in the 70's that they would have gigabytes of RAM and HD displays in their pocket when their grandchildren came to visit, they would have told you that people always overestimate how fast change will happen.
|
|
| |
I wrote my own version of Make for VAX/VMS and MSDOS, back in 1983 or so. I hated the tab mechanism of the Unix make, so I didn't use it, and none of my users (of which there were apparently thousands) ever complained. I got a few "thank you"s for it.

TAB was a stupid choice even in the 1970s.
|
|
| |
If you think of Makefile syntax as an earlier member of the same family as Python, the major difference is that tab may have been a more ubiquitous indent convention at the time (but I really don't know).

I still prefer to indent C with tabs, and I'm not the only one. It's really not hard to handle tabs well, as a code editor or as a programmer. You can view them at whatever width you like. You can have them marked to distinguish them from spaces (e.g. vim's "set list"). They've been around for longer than you have, so there's no good excuse for forgetting about them.
|
|
| |
One of the nice things about indenting with actual tab characters is that you separate the notion of 'indentation' from its displayed width. I set my tab stops to 4, which tightens the code; others set them to 8, still others to 2. If the code has tab characters it just works; if the code has spaces there is no way to have your most comfortable indentation level be used.
|
|
| |
Yes, yes, yes. I'm saddened that tabs have essentially 'lost' - they seem to me in every way superior to spaces, but I would never use them in a new project simply because it would be likely to throw off others, and perhaps tooling. Github, for example, does not play nicely.
|
|
| |
The proper way is tabs to the point of indent, and spaces thereafter. Even Github will work if you do that.
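A sketch of what that looks like in C, with the whitespace spelled out in the comments since tabs and spaces look identical on the page:

    #include <stdio.h>

    static void report(int widgets, int gadgets)
    {
    	printf("%d widgets, %d gadgets\n",   /* indented with one tab          */
    	       widgets,                      /* one tab plus seven spaces, so  */
    	       gadgets);                     /* the arguments stay aligned     */
    }                                        /* under printf( at any tab width */

    int main(void) { report(3, 4); return 0; }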
|
|
| |
Spaces are superior to tabs in only one (important) way: They're a constant width.
|
|
| |
Bam-bam-bam-bam. Sorry, can't hear you over all the whitespace I have to type.
|
|
| |
Why the tab in column 1? Yacc was new, Lex was brand new. I hadn't tried either, so I figured this would be a good excuse to learn. After getting myself snarled up with my first stab at Lex, I just did something simple with the pattern newline-tab. It worked, it stayed. And then a few weeks later I had a user population of about a dozen, most of them friends, and I didn't want to screw up my embedded base. The rest, sadly, is history.

— Stuart Feldman
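For anyone who hasn't been bitten by it yet, this is what that newline-tab pattern left us with - a minimal makefile, where the command line must begin with a literal tab:

    # target: prerequisites
    hello: hello.c
    	cc -o hello hello.c    # this line starts with a tab; use spaces and
    	                       # GNU make rejects it ("missing separator")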
|
|
| |
Stuart was also a FORTRAN programmer :-)
|
|
| |
Source: The Art of Unix Programming
|
|
| |
IMO, the main takeaway from the article isn't that "those 70's developers lacked imagination", but that UNIX is far from perfect. The author even states that UNIX was amazing when it came out, but that doesn't mean all its ideas make sense today.
|
|
| |
Chuck I'm really glad you frequent HN. I love reading your comments and your insight is always valuable. -- A fan.
|
|
| |
Not just tab, but Vertical-Tab.
|
|
| |
Copying an .editorconfig into every new project solves any problems with remembering to use tab characters.
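Something like this, say - a minimal .editorconfig, where the exact properties are just one plausible choice:

    root = true

    [*]
    indent_style = tab
    tab_width = 4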
|
|
| |
Everyone should, at some point, read The Unix-Haters Handbook. A great deal of it is outdated or simply wrong, but it does have a running theme of a prediction that has largely been borne out: people assuming that all Unix's flaws are actually virtues and that if you don't think so, then you Just Don't Get It.

It's not hard to see how this happened: since pretty much all computers that people normally interact with are either running Windows or a Unix-like system, it has set up a dichotomy in people's minds. When the Unix-Haters Handbook was released, there were still other operating systems which could make a plausible claim to being better, but they have all faded away, leaving only these two. And since the "Real Hackers" prefer Unix, Unix and all its decisions must be the right ones.

Unix is great, but people need to be more realistic about its shortcomings instead of mindlessly repeating mantras about "The Unix Way" without critical examination.
|
|
| |
I love motorcycles too. I've owned many sophisticated bikes: Ducatis with their strange desmodromic heads and the ability to dial an exchange of torque for horsepower at the handlebars; Buells with their fuel-in-frame chassis. My current Suzuki even has 4 sensors in the airbox alone: one that measures the air input pressure, one that measures the oxygen level, one that measures the air temperature. Really amazing performance. It will be in a junkyard within ten years though; I won't be able to find those sensors in a few years.

So all that new and advanced technology doesn't really interest me anymore. I'm looking for a 1969 Honda CL350 right now. They're still around and running fine. They're much simpler and much more maintainable. No engine computer. No sensors. Everything really easy to understand. I kinda want my OS like that too. With all its warts I can keep it running.
|
|
| |
> It will be in a junkyard within ten years though. I won't be able to find those sensors in a few years.

Not true. You will be able to get an aftermarket ECU that can just ignore the sensors and run in open-loop mode. That will be exactly the same as running with carburetors: a fixed fuel/air ratio that is almost always wrong. This is also the failure mode for OBDII cars - sensor failures lead to the ECU running in open-loop mode, which lowers MPG and increases emissions, which will eventually foul the catalytic converters.

> I'm looking for a 1969 Honda CL350 right now. They're still around and running fine. They're much simpler and much more maintainable.

My wife has owned a CB175 and a CB550. Both required tons of work and were maintenance nightmares. They really are piece-of-shit bikes when it comes to reliability compared to most Japanese bikes from 1990 onward. The prices old Honda bikes command on the market are completely out of whack with what you get, because of strong demand from both the vintage bike enthusiast and hipster demographics. I would not ride one if it was given to me for free.
|
|
| |
Maintenance "headaches" are part of the appeal of old bikes. It's much easier to learn how an internal combustion powered vehicle works on an old Honda bike than a new one, or on a new car.Compared to other bikes of their day these are very simple to maintain and they were designed from the start to be kept running by the average person. It really depends on what you are looking for in a bike. If enjoying turning a wrench on a Saturday makes me a hipster then pass the beard wax.
|
|
| |
There is weekend wrenching and there is dealing with design flaws and poor manufacturing. Problems with the CB175 and CB550 that were not regular maintenance (carburetor/timing/valves/etc.) related:

* fast cylinder wear (poor materials/manufacturing, engine rebuilds all around)
* unreliable electrical system ("mostly" fixed on the CB550 with Charlie's solid state ignition and rectifier)
* leaking gaskets (design flaw)

I know a lot of vintage Honda collectors and a few racers, and also a lot of vintage BMW collectors. BMW motorcycles from the same era do not have these problems.
|
|
| |
What gaskets leaked? Side covers? I haven't had a problem with side cover gaskets but I did have some replacement non-JIS screws back out because they were not torqued properly. Can't speak to cylinder wear, my bike has close to 10k hard miles and doesn't compression test real well but does work fine.
|
|
| |
I used to share your opinion, but hours of searching for no longer-made parts and obscure wrench sizes and other tools that often simply don't exist anymore cooled my enthusiasm somewhat (fixing 1960s camera lenses)
|
|
| |
All the fasteners I have seen are standard metric sizes; what specialty tools are you referring to? The side cover screws can be stubborn, but even a Phillips #3 with an impact driver ($20.00 at the pawn shop) pops them right off. Conversion kits to standard allen-head screws are cheap, on the order of $50.00 for the whole engine.

There is a special nut on the oil spinner, but that's the only specialty tool I can think of on the bike until you start actually disassembling the whole thing, and you don't even have to remove it to do an oil service. I guess the shock/steering head adjuster is a specialty tool? But that was included with the bike, so not hard to find either. Parts can be a bit harder, but since these things were so popular it's a lot easier than for any other bike from 1969. Also the aftermarket is huge if you don't care about staying totally stock.
|
|
| |
did you have fun converting JIS to metric? I learned that one the hard way. Three EZ-Outs later...
|
|
| |
That's what the impact driver is for. I didn't have to convert any bolts to metric, all those are already metric. Only the screws need to be converted.
|
|
| |
Newer than 1990 doesn't mean fuel injected and sensors out the wazoo. I have a 2001 Bandit and it's brilliant, same power and fuel economy as the current model and pure old fashioned air cooled carbie goodness. Nearly 70k kms and the mechanic reckons it'll be good for as much again.
|
|
| |
Interesting comment. Got to say I agree with the direction of it, to some extent - stable and powerful over newfangled and weak / unstable / buggy / done for glory rather than substance.

Do you know about the Royal Enfield Bullet [1] from India? It is not at all as technologically sophisticated as the bikes you mention, but it is a fantastic bike to ride. They have been selling it in the West, too, for some years. Originally from the Enfield company, UK, it was then manufactured in India for many decades (maybe starting around WWII) as the same standard model. Then a decade or more back, the new managing director invigorated the company with better quality, newer models, higher engine capacity models (like 500 cc), etc. - though I would not be surprised if some fans still prefer the old one. Maybe me too, except I have not ridden it enough; I rode a 250 cc Yezdi much more - also a great bike, almost maintenance-free, a successor to the classic Jawa bike from Czechoslovakia, and also made in India for many years. Yezdi production stopped some years ago, last I read, but the Bullet is still going strong and even being exported, in good numbers, to the West.

[1] https://en.wikipedia.org/wiki/Royal_Enfield_Bullet

https://www.google.co.in/search?q=royal+enfield+bullet

A Swiss guy, Fritz Egli (IIRC), was/is a fan and modified some Bullets over there. It was the subject of a magazine article. I first rode a Bullet in my teens. A real thumper.
|
|
| |
Hey, Royal Enfield! My uncle was behind the company that was the sole distributor of Royal Enfield for Australia/NZ for a good decade or so (sold the business a few years ago).
|
|
| |
I have a 1973 CL350 which I am currently getting back on the road. They are fantastic bikes, largely because they are robust, dead simple, and Honda made tens of thousands of them. Having said that, they did have some components that have not aged well.

The number one flaw on these bikes, in my opinion, is the stock carburetors. Honda used a constant-velocity type carburetor which in theory provides very smooth throttle action and is easier to ride. In reality the vacuum diaphragm is a very delicate part that frequently fails with tiny air holes that leak vacuum, causing a mismatch in the throttle input between the cylinders (twin-carb two-cylinder, one carb per cylinder). This is a similar failure mode to your modern Suzuki's air pressure sensor failing.

The other pain point is the mechanical points in the ignition system. This is an area of constant fiddling with adjustment and new condensers. It's much preferred to simply replace the points system with a "modern" (1980s technology) electronic ignition system. This removes the moving parts and greatly extends the life of a tune-up.

Old bikes are super cool and the 350 platform is a fantastic one, but even back then bikes had "high tech" parts that did not age well, and gaps where better tech had not been invented. The great thing about the 350 platform is that due to its popularity people are still coming up with solutions. In this way a Honda 350 is similar to Unix.
|
|
| |
Occasionally, when I run out of analogies for why convoluted systems are not 'flexible', I reach for a half-remembered homage I encountered as a teenager, about the Chevy straight-6.

This was not a fancy engine. It was not a particularly powerful engine. Not a single thing on it or about it was exceptional. But it was easy to work on, and with add-ons and modifications it could be coaxed into doing things it wasn't really meant to do. It was just solid and got out of your way. So engineers and hobbyists tinkered and tinkered and got something over 2.5x the original horsepower out of the thing.

Unix commands might be solid, but the 'get out of your way' bit is what troubles me. People will lament otherwise good engines that are hard to work on because of a design flaw or the way they're laid out. Unix is falling down here. It's just that everyone else is at least as bad. But a guy can dream.
|
|
| |
The single most disastrous failing of UNIX is the assumption that everyone wants to be a programmer - or if they don't want to be a programmer, they damn well should become a programmer anyway.

It's nonsense. Programming as it's done today - which is strongly influenced by UNIX ideas - is the last thing most users want to do. They have absolutely no interest in the concepts, the ideas, the assumptions, the mindset, the technology, or the practical difficulties of writing software.

UNIX set human-accessible computing back by decades. It eventually settled into a kind of compromise at Apple, where BSD supplied the plumbing and infrastructure and NeXT/Apple built a simplified app-creation system on top of it, which could then be used to build relatively friendly applications. But it's still not right, because the two things that make UNIX powerful - interoperability and composability - didn't survive. They were implemented in a way that's absolutely incomprehensible to non-programmers. Opaque documentation, zero standardisation of command options, and ridiculous command name choices all make UNIX incredibly user-hostile. Meanwhile the underlying OS details, including the file systems and process model - never mind security - fall somewhat short of OS perfection.

The real effect of UNIX has been to keep programming professional and to keep user-friendly concepts well away from commercial and academic development. Programming could have been made much more accessible, and there's consistent evidence from BASIC, Hypercard, VBA, the HTML web, and even Delphi (at a push) that the more accessible a development environment is made, the more non-developers will use it to get fun and/or useful things done.

UNIX has always worked hard to be the opposite - an impenetrably hostile wall of developer exceptionalism that makes the learning curve for development so brutally steep it might as well be vertical. UNIX people like to talk about commercial walled gardens as if they're the worst possible thing. But UNIX is a walled garden itself, designed - whether consciously or not - to lock out non-professional, non-developer users and make sure they don't go poking at things they shouldn't try to understand.
|
|
| |
I agree with a lot of what you wrote. It's true that there is a "keep it for the elite" mindset in the back of many programmers' minds. For years on forums I've seen that while I tried hard to explain basic concepts to newbies, others were happier with the RTFM answer.

Still, I'm not so sure that UNIX (or programmers) is the source of it. I started with DOS (later Windows) and TP (later Delphi), so please don't think I'm biased here. I recently bought an Acer convertible with Windows 10 for my mother, and so I'm getting a reality check on the sad state of computer usability in 2017. Teaching her to use an Android phone was difficult, but this is no better. IMHO the reason for user-hostility is not some guild mindset; it's just that computer adoption has happened much faster than decent GUIs could be developed. The RTFM knee-jerk reaction comes later, from people with some deep insecurities and not much imagination.

Totally agree that UNIX is also a sort of walled garden. I'd say the same thing about the GPL'd ecosystem, in this case for license issues.
|
|
| |
What OS would be analogous to the Honda in your example? OpenBSD? Plan9?Not Linux, surely. That would be more like the Suzuki, but it used to be an older bike so they left the carburetor in there and next year someone will add an electric motor too.
|
|
| |
Any free software OS can meet his requirements as they all are transparent to anyone with the time to learn, just like the older bikes. The problem with modern bikes is that all the advanced tech is proprietary and you have no way of understanding it or tinkering with it.IMO you can take any modern distro and strip it down to something understandable. It just takes some time to learn how to strip it down and how what's left works (and this is ongoing, as things are always changing). I'm not saying it's trivial, but neither is learning how to rebuild a motorcycle.
|
|
| |
Excellent extension to the original analogy.
|
|
| |
Just remove systemd, which is looking like an electronic injection system that requires 4GB of RAM and a full OS to work, and keeps screaming for more.
|
|
| |
All systemd-related processes (dbus, systemd-*, init, etc.) running on my system are using < 25MB of memory. So that is a bit excessive, even if you're exaggerating for effect.
|
|
| |
I don't normally get into these, but this is blatantly false; I'm running systemd on multiple older/smaller computers like an OG Raspberry Pi and an old netbook, and I don't even notice its memory consumption.
|
|
| |
We run systemd on 512MB Celerons no problem...
|
|
| |
I'm sad that 512MB is now considered a small amount. I've got a couple of systems in my drawer that have <128MB.
|
|
| |
I've used OpenBSD in the past and found it much simpler than Linux, but it didn't always support what I needed to do. These days I'm doing less but doing it better.
|
|
| |
> These days I'm doing less but doing it better.On Linux or OpenBSD?
|
|
| |
Pretty sure he means OpenBSD and I'm in the same boat.Also, Erlang and Elixir run out of the box on it, so it's suitable for nearly all of my personal projects.
|
|
|
| |
I have a Kawasaki KLR 650. Kawasaki has been selling pretty much the same bike since 1987. No fuel injection or ABS. They are very inexpensive (new ones can be had for about $6000). There's a running joke in the community that every year the main update on the new model is bold new graphics.

Spare parts are easy to find and there are a lot of aftermarket parts. The downside is that the bike is basically a late-'80s bike: 40ish horsepower, poor fuel economy, weak brakes, poor handling, etc. But, like lots of people say, it's more fun to ride a slow bike fast than a fast bike slowly. I love it.
|
|
| |
I did similar with cars, though I still have the ECU to deal with. Eventually I'll replace that with a Megasquirt, because I can repair/replace all the components and know how the whole system works.When I thought I wanted a bike, I looked around for a Condor A350.
|
|
| |
The famous "each tool does just one thing well" mantra is also a depressingly overblown myth. Among the current Unix tools, there is a ridiculous amount of overlap.That's why we have both "ls" and "find", even though they do the same thing conceptually. "ps" has column output, but the way it formats, sorts, selects etc. columns is reinvented and not transferable to other tools such as "lsof" and "netstat", not to mention "top", which of course is just a self-updating "ps". Every tool invents filtering, sorting, formatting etc. in their own unique way. The various flags are not translatable to other tools. I've never used Powershell, but it seems to get one thing right that Unix never bothered to. Tools should emit data, or consume data, or display data, but tools should never do all three, because they will invariably reimplement something poorly that has been done elsewhere.
|
|
| |
Powershell did one thing wrong, in that its tools emit objects, not data. In other words, they emit data with behavior (methods). This, in turn, imposes the CLR object and memory model on the whole thing, and makes the pipeline impossible to use between distinct processes.The right way to do this is to pick some reasonable text-based structured interchange format - s-exprs, JSON, whatever. Actually, it wouldn't be a bad thing to have a binary protocol as well, so long as all that complexity is negotiable, and is implemented by the standard library.
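A sketch of what that buys you, with JSON as the format and jq as one off-the-shelf consumer (my choices for illustration, not something prescribed above):

    # Any process can emit this, in any language, across any process boundary:
    echo '{"pid": 1234, "comm": "nginx", "rss": 51200}' |
        jq -r 'select(.rss > 4096) | .comm'    # ...and any process can consume it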
|
|
| |
Agreed. One huge benefit to a text format is that you can generate small "ad hoc" snippets (e.g. echo) easily. Otherwise I'd love something like Protocol Buffers here.
|
|
| |
I did not know about this!

http://web.mit.edu/~simsong/www/ugh.pdf

It even has an anti-foreword by Dennis Ritchie, which kinda reminds me of the Metropolitan Police spokesman's blurb on the back of Banksy's book.

BTW, it starts off with an anonymous quote that I've never heard: "Two of the most famous products of Berkeley are LSD and Unix." Unix is of course from Bell Labs. And anyone who knew anything would have said instead, "Two of the most famous products of Berkeley are LSD and BSD", which at least would have been funny if still inaccurate.

Anyways, it seems like a fun rant of a book which I'd never heard of. The above point about not getting it can be applied to Linux as well. Lessons learned elsewhere are stupid until Linus finally understands them and then they're obvious.
|
|
| |
The whole document is great but the anti-forward is so good I'm going to risk downvotes by reproducing it in its (short) entirety here, emphasis mine:

From: dmr@plan9.research.att.com
Date: Tue, 15 Mar 1994 00:38:07 EST
Subject: anti-foreword

To the contributers to this book:

I have succumbed to the temptation you offered in your preface: I do write you off as envious malcontents and romantic keepers of memories. The systems you remember so fondly (TOPS-20, ITS, Multics, Lisp Machine, Cedar/Mesa, the Dorado) are not just out to pasture, they are fertilizing it from below.

Your judgments are not keen, they are intoxicated by metaphor. In the Preface you suffer first from heat, lice, and malnourishment, then become prisoners in a Gulag. In Chapter 1 you are in turn infected by a virus, racked by drug addiction, and addled by puffiness of the genome.

Yet your prison without coherent design continues to imprison you. How can this be, if it has no strong places? The rational prisoner exploits the weak places, creates order from chaos: instead, collectives like the FSF vindicate their jailers by building cells almost compatible with the existing ones, albeit with more features. The journalist with three undergraduate degrees from MIT, the researcher at Microsoft, and the senior scientist at Apple might volunteer a few words about the regulations of the prisons to which they have been transferred.

Your sense of the possible is in no sense pure: sometimes you want the same thing you have, but wish you had done it yourselves; other times you want something different, but can't seem to get people to use it; sometimes one wonders why you just don't shut up and tell people to buy a PC with Windows or a Mac. No Gulag or lice, just a future whose intellectual tone and interaction style is set by Sonic the Hedgehog. You claim to seek progress, but you succeed mainly in whining.

Here is my metaphor: your book is a pudding stuffed with apposite observations, many well-conceived. Like excrement, it contains enough undigested nuggets of nutrition to sustain life for some. But it is not a tasty pie: it reeks too much of contempt and of envy.

Bon appetit!
|
|
| |
Thank you for posting this here; I absolutely love that anti-foreward, and it was life-changing for me in that it offered shelter to those of us who were defending Unix during a very, very dark time (namely, the mid-1990s). I have also (shamelessly) cribbed dmr's beautiful closing metaphor of a fecal pie and its "undigested nuggets of nutrition" -- it seems to describe so much that is not entirely devoid of value, but is utterly foul nonetheless.
|
|
| |
How were the mid-90s a dark time for Unix? On every engineer's desk at every place I worked, and in every d/c I deployed to, it was the only option (I avoided the AS/400, workhorse that it may well have been). Now, which Unix (HP-UX, Solaris, AIX, IRIX) caused some angst, depending on the use-case...
|
|
| |
Even for a UNIX hater, that forward was great. Masterful prose.
|
|
| |
It's 'foreword'. And I think some of the rants inside are even better than this.
|
|
| |
From the book: "Did you know that all the standard Sun window applications
(“tools”) are really one massive 3/4 megabyte binary?"Oh for the days when 750kB was considered "massive" for a binary.
|
|
| |
cough cough Busybox cough
|
|
| |
IIRC, the original (1969) version of UNIX was designed to run on a computer with 24 kilobytes of RAM.
|
|
| |
Oh the days without process isolation and memory management.
|
|
| |
The original UNIX ran on a PDP-7, which didn't have bytes. The memory was 8K of 18-bit words.
|
|
|
|
|
| |
From the anti-foreword:

> Here is my metaphor: your book is a pudding stuffed with apposite observations, many well-conceived. Like excrement, it contains enough undigested nuggets of nutrition to sustain life for some. But it is not a tasty pie: it reeks too much of contempt and of envy.

> Bon appetit!

Pretty good.
|
|
| |
> Lessons learned elsewhere are stupid until Linus finally understands them and then they're obvious.

I think you're suffering from selection bias.
|
|
|
| |
> since pretty much all computers that people normally interact with are either running Windows or a Unix-like system, it has set up a dichotomy in people's minds

I really wish undergraduate Software Engineering programs included a course that was a survey of operating systems, where you'd write the same program (that did a lot of IPC) on, say, base WinNT (kernel objects with ACLs!); base Darwin (Mach ports!); a unikernel framework like MirageOS; something realtime like QNX; the Java Card platform for smart cards; and so forth. Maybe even include something novel, like http://genode.org.
|
|
| |
There are interesting parallels between operating systems and political parties here, and this is not a coincidence: both operating systems and political parties are infrastructure, and infrastructure is really hard both to build and to replace once it has become established. So in both cases you see a lot of rationalization about how the existing solutions are perfectly OK even though they are not simply because no one wants to do the heavy lifting (and in both cases it is very heavy lifting) required to actually come up with something better.
|
|
| |
Political parties present an abstract view of how society works that simplifies the complexity of the underlying system. It offers a set of interfaces that allow a large mass of untrusted individuals to influence the whole.
|
|
| |
> flaws are actually virtues and that if you don't think so, then you Just Don't Get It.

Hey, that sounds like Go's line. Oh wait ..
|
|
| |
Sounds just like Rust as well.
|
|
| |
Literally have never heard Rustaceans defend poor design choices like that before.
|
|
| |
Well, if you don't get that someone else might see something as a flaw, then perhaps you wouldn't understand that you may be defending a bad design choice.

I use Rust. There's a bunch of things I do not like about Rust. Its macro language is nigh on unusable. A macro language should be an expedient; it should allow you to get textual things done, committing necessary atrocities along the way, knowing that the result must still pass muster with the compiler. It isn't supposed to remind you of a 61A midterm.

I like the safety, and that is why I use Rust. I don't get adding functional programming to a systems programming language.
|
|
| |
M4 is a standard POSIX utility and it's Turing-complete. It's difficult to out-class recursive string macro expansion for sheer simplicity and sheer ability to confuse the heck out of yourself.

Alas, while it's a mandatory POSIX utility (AFAIU), some Linux distributions have seen fit to remove it from base installs.
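A taste of it, using only builtins that POSIX m4 mandates (define, ifelse, decr, dnl):

    define(`countdown', `$1 ifelse(`$1', `0', `', `countdown(decr($1))')')dnl
    countdown(3)
    dnl The line above expands, recursively, to: 3 2 1 0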
|
|
| |
I never said Rustaceans don't see flaws or that they don't defend bad choices. But I've never seen someone say "if you don't think so, then you Just Don't Get It" or any variant thereof in the Rust community.
|
|
| |
You mean the way they defend some of the numerics in Rust?

Yeah, totally have never seen that...
|
|
| |
What in particular are you referring to? I ask honestly, because this "my way or the highway" attitude has never been the Rust community's way.
|
|
| |
Having configured and used VMS, I feel I can conclusively say that the people in TUHH who preferred it to Unix were out of their goddamn minds.
|
|
| |
To defend VMS: if you have to have an always-up, hot-backup, multi-machine cluster that you can upgrade without taking the cluster down, OpenVMS is quite nice.

As a developer, it was a bit painful, but the whole file-versioning thing was handy sometimes.
|
|
| |
Oh my god yes. That and various other operating systems I can think of that I had the misfortune to develop on. Thankfully they died out long ago.
|
|
| |
There are many early UNIX design decisions that have outlived their shelf life by decades.

Probably the biggest one is that UNIX is, at bottom, a terminal-oriented multi-user time-sharing system. This maps badly to desktop, mobile, and server systems. The protection model is a mismatch for all those purposes. (Programs have the authority of the user. Not so good today as in the 1970s.) The administration model also matches badly. Vast amounts of superstructure have been built to get around that mismatch. (Hello, containers, virtualization, etc.) Interprocess communication came late to UNIX/Linux, and it's still not a core component. (The one-way pipe mindset is too deeply ingrained in the UNIX world.)
|
|
| |
Hence why UNIX on mobile is a Pyrrhic victory: iOS, Android, and ChromeOS rely on Objective-C, Java, and JavaScript runtimes and their respective frameworks, with just good enough POSIX support - support that could be replaced by what is expected from any ANSI C implementation.
|
|
| |
There is no Objective-C "runtime". (Well, there sort of is, but it's just a library, not a heavy-handed thing like you're thinking of.) Unlike Android, iOS apps are just normal compiled ARM machine-language binaries executing natively.

Yes, you have to use their APIs in order to write graphical programs, but the same is true on any OS with a GUI system. It's possible to write iOS apps in pure C if you want to. Sure, that'd be a pain, but it's possible. Less painful, and actually decently reasonable, would be to write all the GUI-specific stuff in Objective-C and any other logic in pure POSIX-conforming C or C++, since you can mix all those languages freely in a project.
|
|
| |
Isn't that the whole point of POSIX though?
|
|
|
| |
I don't think the terminal-based multi-user time sharing model maps poorly to servers, at the very least. In fact if you're going to pick one model for the general class of servers to run under, that's probably going to be the most versatile. Sure supercomputers and data centers may stand to benefit from a model that ignores all the multi-user features and such, but in architectures where each server is acting at least semi-autonomously (i.e. not under the control of what is essentially some distributed operating system such as Yarn, SLURM, etc.) I think you'd struggle to come up with a better model. This shouldn't be surprising as this is basically the exact use case that UNIX was built for.
|
|
| |
> The protection model is a mismatch for all those purposes. (Programs have the authority of the user. Not so good today as in the 1970s.)

I disagree, I think this is still the sweet spot between security and utility. Users have been trained to just click approve on any privilege escalation dialog.
|
|
| |
The capability model is much more flexible. See Combex's desktop or "PowerBoxes" for how simple it can be for users to maintain POLA. An older system doing it was KeyKOS on IBM mainframes. KeyKOS + KeySAFE was a strong architecture.
|
|
| |
Is there a description/examples of these anywhere? Combex doesn't provide a lot of info and I couldn't find anything on PowerBox.

From the description of Combex (http://www.skyhunter.com/marcs/capabilityIntro/):

> Suppose you were running a capability-secure operating system, or that your mail system was written in a capability-secure programming language. In either case, each time an executable program in your email executed, each time it needed a capability, you the user would be asked whether to grant that capability or not. So Melissa, upon starting up, would first find itself required to ask you, "Can I read your address book?" Since you received the message from a trusted friend, perhaps you would say yes - neither Melissa nor anything else can hurt you just by reading the file. But this would be an unusual request from an email message, and should reasonably set you on guard.

In reality, users will get sick of being prompted every 30 seconds and learn to automatically approve every request. Capability security works well in theory, but I've never seen an implementation that works well in practice.
|
|
|
| |
> Claim

That's the keyword there. They don't actually demonstrate a lot of common apps and how the user is prompted. It sounds a lot like Windows UAC with a default lockdown. They don't even mention whether permissions are permanently granted or not.
|
|
| |
Agree, and we're slowly seeing more "capabilities lite" functionality in Unix-like OSes.I think the next step is a capability runtime OS with a kernel personality for Linux for backwards compatibility. Sort of the converse of what we're doing right now.
|
|
| |
> This article was written hastily, and I don’t want to further improve it. You’re lucky I wrote it.

I feel so privileged to read this random guy's blog, and it's terrific that he eschews inflating his ego so well.
|
|
| |
As a hobbyist blogger, I can perfectly empathise with why the author wrote that. People love to crap all over a blogger who dares to post his thoughts on a private blog without first subjecting them to PhD-thesis-level scrutiny. This really gets on my nerves, for the same reasons that engineers get pissed off when they decide to open-source a pet project and suddenly start getting "URGENT ASAP" feature requests from entitled users.

The author is taking the time to post his thoughts on a private blog. If you're not happy with the level of rigor, then don't read it, don't share it, and don't believe it. But no, the author has no responsibility to provide you with a comprehensive list of citations and references.
|
|
| |
> "If you're not happy with the level of rigor, then don't read it, don't share it, and don't believe it."What I don't like about that attitude is that it suggests that the blogpost should be excused from criticism of its rigor. But any assertion and opinion by anyone can and should be considered open to criticism. > "But no, the author has no responsibility to provide you with a comprehensive list of citations and references." Of course he is not responsible. But if someone expects or wants people to be convinced of their argument, then prefacing with his rude dismissal does not help. A better way than "This article was written hastily, and I don’t want to further improve it. You’re lucky I wrote it." without the unnecessary rudeness and ego boosting could be more along your lines: "This article was written hastily and shouldn't be subject to PhD-thesis-level scrutiny, so I've posted without any expectation that I will improve it or respond to criticism."
|
|
| |
> A better way than "This article was written hastily, and I don’t want to further improve it. You’re lucky I wrote it." without the unnecessary rudeness and ego boosting could be more along your lines: "This article was written hastily and shouldn't be subject to PhD-thesis-level scrutiny, so I've posted without any expectation that I will improve it or respond to criticism."

Sounds to me like you agree with the substance of what the author said, and just happen to disagree with the delivery. Keep in mind that different people have different writing styles, and people often post things like what you quoted in a tongue-in-cheek manner. Some people prefer a formal, humble style of writing. Others like a more humorous style, and others yet enjoy a Stephen Colbert faux-braggadocious style. It would be unfortunate if we browbeat anyone who dares to inject some quirky humor into their writing.
|
|
| |
No, he disagrees with the substance. Faux-braggadocio or humor is fine! But if you earnestly think that's the writer's style after reading the last half of the first paragraph, seriously ponder why none of the rest of the article matches that perceived tone.

The author makes a bunch of assertions that range from misinformed to straight-up wrong. There is 0% Stephen Colbert-style self-awareness. No one is lucky that it exists. When you engage in personal endeavors, do them for their own merits. And especially if you're doing something that no one and nothing is compelling you to do, please at least try.
|
|
| |
I can relate too. I once started a blog post with "if you don't like [it], go jump in a lake."

I was envisioning people nit-picking the lack of formality, when it was merely an enthusiastic, from-the-hip post. A day later I realized that wasn't the best way to start things off, and toned it down. But I get where he's coming from. And I'm grateful he took the time to write it. It was an interesting read.
|
|
| |
I stopped posting code from side projects I was done with when it started taking on a maintenance life of its own and getting angry emails. There was a simplicity to the time when you could put a tarball on an FTP server and post the path to the appropriate Usenet group: "Here's a tarball. Have at it. Or don't."
|
|
| |
I respectfully disagree with that last part. IMO it's your moral obligation as a distributor of information to ensure, to the best of your ability, its truth and validity.

I'm not saying it has to be PhD-thesis-level, but if proven wrong you are morally obligated to update the information or remove it. Otherwise you are spreading false knowledge. This doesn't apply to opinions ofc (the bulk of private blogging). Screw entitled users though, they should fix it themselves :)
|
|
| |
Nothing can really be trusted without judgement, and as long as it's not claiming to be an authoritative source about a life-and-death situation, I wouldn't want to put the bar as high as an obligation.

After 20+ years of working in the business and reading ridiculous amounts of articles, and some books, I can quite easily find clear errors or omissions in almost everything written about computers and programming, Knuth excluded. If everyone were obliged to fix any errors, even only factual ones, it's likely that much - or even most - of all I've read would never have been posted for me to read in the first place, and I think both the world and I would be poorer for it.

Correctness is important, but so is the ability to make judgments of validity, to critique, disseminate, inspire and express one's thoughts, even when they turn out to be wrong.
|
|
| |
I'd have much more respect for him if he wrote an entire article explaining his position rather than prefacing an otherwise unrelated article with his frustrations. I'm not lucky he wrote it. From my perspective he wasted his time writing it, because I don't care enough to read past the first paragraph.
|
|
| |
My favorite is when somebody makes a blog post titled "A Great Way to Do X in Unix" and then posts a bash script they wrote.

You can just start the countdown until all the Unix greybeards come out and slam them with why it's so wrong and how to do it with some obscure bash feature.
|
|
| |
What bugged me more was the snippets designed to show how hard it is to write shell scripts. It looked like the author just didn't know enough about their tools:

> How to touch all files in foo (and its subfolders)?

    find foo -print0 | xargs -0 touch
    find foo -exec touch {} \;

I agree with the other commenters who think it's fine to write whatever you like on your own blog; I just feel that it went from interesting historical warts to bagging on legacy systems because they're complicated.
|
|
| |
There's such a problem with immaturity in tech. It's frustrating because it creates barriers to learning from others. No self-respecting person is going to want to learn from someone who has a "holier than thou" or "I'm a rockstar" mindset.
|
|
| |
> There's such a problem with immaturity in tech. It's frustrating because it creates barriers to learning from others. No self-respecting person is going to want to learn from someone who has a "holier than thou" or "I'm a rockstar" mindset.

You wouldn't consider it immature to refuse to do the right thing just because it was communicated in an unpleasant way that a 'self-respecting person' should have disdain for? I'm not talking about this particular article or defending immaturity - I just don't see how your position is any better; if anything it's just as unhumble as the 'rockstar position', since you would apparently be willing to write worse code just to avoid learning from someone who's an asshole.
|
|
| |
Contrary to your beliefs, one can have both self-respect and a sense of humor.
|
|
| |
It's hard to believe this came from a place of humor after reading the otherwise dry article.
|
|
| |
It's your prerogative to follow trends to determine what you find worthy of your time or not, but it's not a good thing to suggest that anyone who tolerates that type of language better than you do, or even enjoys it, lacks self-respect.

A lot of people find that cockiness funny and find the informality welcoming. And many of those tend to find cold formality, fake humility or lack of self-confidence to be signs of something boring, uninteresting or outright creepy.

EDIT: The comment above was edited; the original was: "It's hard to believe this came from a place of humor after reading the otherwise dry article. The article was otherwise fairly interesting, but that certainly left a sour taste in my mouth. He's lucky I read past the first paragraph; if it wasn't so prominent on HN I probably wouldn't have bothered listening to anything he had to say."
|
|
| |
That's fair. I'll admit that he may be writing to an audience very different from myself.

The edit was because I only ended up scanning what I intended to read, and I felt the second part of my criticism was overly harsh.
|
|
| |
I guarantee you would not find it funny if I called you a cunt. Remember, none of the rest of the article even remotely matches the tone of the sentences in question, and the content largely comes from a place of ignorance.
|
|
| |
I want to add this as a comment in all my source files.
|
|
|
|
| |
Isn't this ironically referencing a lot of Unix development?
|
|
| |
It's a joke. The last sentence of the article is:

> The provocative tone has been used just to attract your attention.
|
|
| |
That's a pity; because of that tone I abandoned reading after a few sentences and didn't get to the last one.
|
|
| |
For good or bad reasons, you made the right choice: the article was full of crap (bad understandings, lack of historical knowledge - and of plain technical knowledge, even about current tool versions - bad references, bias, blind repetition of wrong or outdated rants, etc.).

You could also have guessed it when you noticed it was a kid student showing up (and off) with a lengthy critique of the historical evolution of UNIX tools and of other OSes and tools that he discovered a couple of years ago, while you were possibly using them and watching them evolve while he was still in his father's bollocks (and some of us (not me) were working on UNIXes while his father was a kid). You know what to expect in this case, especially if you remember having produced the same kind of over-confident and yet uninformed rant in your day :-) (I do :-D )
|
|
| |
...and the world continued to spin.

There's irony in people being insulted by an arrogant tone and loudly proclaiming they stopped reading something.
|
|
| |
I don't think there's anything ironic about dismissing the views of self-important assholes.It may be that the author is not actually a self-important asshole, but starting an article like that makes it look like he likely is.
|
|
| |
Yes, anything funny (even absurdist humor) is predicated on a kernel of truth. Quite odd, then, that the article contains so little well-informed opinion and even less humor.
|
|
| |
yep, that's where I closed the tab
|
|
| |
"We really are using a 1970s era operating system well past its sell-by date. We get a lot done, and we have fun, but let's face it, the fundamental design of Unix is older than many of the readers of Slashdot, while lots of different, great ideas about computing and networks have been developed in the last 30 years. Using Unix is the computing equivalent of listening only to music by David Cassidy."Rob Pike 2004, https://interviews.slashdot.org/story/04/10/18/1153211/rob-p...
|
|
| |
On the other hand, the record player he first heard David Cassidy on will still work in the next house he buys. That's because, once we settle on foundational stuff like electricity delivery, we don't break it every time we have good ideas about it. It's the same with everything, like math or computer hardware: after a period of diverse experiments, the foundations solidify and we build immense structures on top of them. It's a net win, overall, even if we get stuck in local maximums for too long sometimes.

In view of that, it's only natural that big segments of software beyond the OS, like word processors, are also stagnant. We don't actually need a diverse marketplace of competing word-processing ideas anymore... the problem is fundamentally solved as far as the public is concerned. It's not as exciting for software developers, but it's totally natural for it to happen.
|
|
| |
> while lots of different, great ideas about computing and networks have been developed in the last 30 years. Using Unix is the computing equivalent of listening only to music by David Cassidy.

The problem here is that there aren't a lot of alternatives. You could use Windows, which is like listening only to music by MC Hammer, or you could use a Mac, which is like listening only to music by Duran Duran.

Because of software/backwards-compatibility concerns, and how dependent everything is on the underlying OS, it's really hard to change anything in the OS, especially the fundamental design. It'd be nice to make a clean-sheet new OS, but good luck getting anyone to adopt it: look at how well Plan 9 and BeOS fared.
|
|
| |
> which is like listening only to music by Duran DuranYou say that like it were a bad thing.
|
|
| |
I'm hoping you are being sarcastic, but I do not want to live in a world where I only listen to my favorite band.
|
|
| |
It's OK at first but once you get beyond the shiny, it's quite limiting.
|
|
| |
It's why everyone focuses on building on some cross-platform abstraction layer or something similar. In many ways, things like Amazon Lambda are a 'new OS'.
|
|
| |
The thing is, UNIX, by definition, is never going to move beyond its original design, meaning POSIX + C.

Whereas Mac OS, Windows, iOS, Android and ChromeOS have moved into more productive language runtimes, with rich frameworks, improving safety across OS layers, even if they have had a few bumps along the way.
|
|
| |
That's not true. Windows in particular is greatly limited by past design choices, such as filename limitations, the security model, the fact that many programs have to be run as administrator because that's how they were written years ago, etc.

There's nothing preventing you from running different language runtimes and such on Linux/Unix systems; people do it all the time. Have you not noticed Mono? It's been around for many years. Plus frameworks like Qt; that certainly wasn't around before the late 90s.
|
|
| |
Windows 10 already sorted out the filename limitations.

A few more releases and in around 10 years, Win32 will be drinking beers with Carbon.

Mono and Qt don't change the architecture of UNIX, and their adoption across UNIX variants isn't a game changer.
|
|
| |
> Windows 10 already sorted out the filename limitations.

No, it hasn't. Filenames are still case-insensitive (and in a terrible way, where it seems to remember how they were first typed but that can never be changed), backslashes are still used for path separators instead of escaping characters, and the worst of all is that drive letters are still in use, which is an utterly archaic concept from the days of systems with dual floppy drives. Also, try making a file with a double-quote character in it, or a question mark. I've run into trouble before copying files from a Linux system to a Windows system because of the reserved characters on Windows.

> Mono and Qt don't change the architecture of UNIX and their adoption across UNIX variants isn't a game changer.

Nothing you've mentioned has changed the architecture of Windows. The fundamental architecture of Windows hasn't changed at all since WinNT 4.0 (or maybe 3.5); it just has an ugly new UI slapped on top and some slightly different administration tools.
|
|
| |
Windows NT (the core OS) suffers none of those problems. The Windows (Win32) environment suffers those limitations. It also suffers 20+ years of strong binary compatibility, broad hardware support, and consistent reliability that systems of similar class (e.g. Linux desktops) can't match.

If it makes you feel any better, drive letters are a convenient illusion made possible by the Win32 subsystem; NT has no such concept and mounts file systems into the object hierarchy (NT is fundamentally object oriented - a more modern and flexible design than is provided by UNIX).

The fundamental architecture of Windows, the kernel, hasn't changed in ages because it doesn't need to; it is far more sophisticated than UNIX will ever be and far more sophisticated than you will ever need. The fundamental architecture of Win32 hasn't changed since 32-bits was an exciting concept and it won't change because the market has said loud and clear that they want Windows-level compatibility. See Windows RT and The Year of the Linux Desktop for evidence that users aren't clamoring to ditch Win32 in favor of something more pure.
|
|
| |
Complaints of a UNIX refugee on Windows. For us, Windows pathnames are just fine.

As for Windows architecture, maybe you should spend some hours reading the Windows Internals book series, and the BUILD and Channel 9 sessions about MinWin, Drawbridge, picoprocesses, UWP, user-space drivers, Secure Kernel, ...
|
|
| |
> you should spend some hours reading Windows Internals book series

I'd love to read the source code myself to see how it works instead.
|
|
|
| |
Google "davec apcobj.c"; that'll point you in the direction of some of Dave's brilliant work that dates back to 1989.(Three versions of NT have been leaked; NT 4.0, Windows 2000, an the Windows Research Kit (which is Win2k3) -- they are all trivial to find online (first page of Google results).)
|
|
| |
You can check out ReactOS which works like Windows but is open source.
|
|
| |
> Windows 10 already sorted out the filename limitations.

Only for new applications. Those written against the old ABIs still trip over after the 240th character, even though the file system supports much more. To put it another way: I believe the inherent Unix limitations (the process model for terminals, process signalling, lack of structure) are still less limiting than DOS assumptions about the consumer hardware and applications of the 80s.
|
|
| |
You are moving the goalposts from OS limitations to limitations of old applications.

Unix applications written using old assumptions (e.g. 14(?) characters max for symbol names in libraries, a 16(?)-bit address space, assuming the C library knows of gets) can have problems on modern systems, too.
|
|
| |
What do you mean by Unix in this context? Everything save for Windows bears a strong kinship with Unix, and every OS is more than capable of running additional runtimes and frameworks beyond C.
|
|
| |
UNIX and C are symbiotic; regardless of whatever runs on top, only POSIX and C are common to any UNIX.

Windows' roots are in VMS, not UNIX. There is hardly anything UNIX-related in its architecture, as far as kernel design goes.
|
|
| |
> only POSIX and C are common to any UNIX.

What about something like this: https://www.redox-os.org/ ? It's written in Rust, not C. In fact, according to the GitHub stats, there is no C:

    Rust 72.4%
    Shell 13.2%
    Makefile 12.5%
    TeX 1.9%
|
|
| |
It was an OpenVMS derivative, with code copied or clean-slated against a modified form of its behavior. However, I heard the networking stack was from BSD.
|
|
| |
The IP stack is. There's even still an etc/ directory buried in the Windows tree to support it.
|
|
| |
Further evidence that Rob Pike is a second-rate engineer and a lousy writer.
|
|
| |
Hm? I mean, I don't know how one could think that of someone who spearheaded Go, but that paragraph seems reasonable to me.
|
|
| |
But just because an idea is old doesn't mean it is necessarily bad. Some ideas, like the Unix Philosophy, have stood the test of time. The fact that this statement was made in 2004 and that Unix is still going strong 13 years later is proof.
|
|
| |
Inertia is only proof of inertia.
|
|
| |
Suggesting that Unix has been continuing just by inertia doesn't explain why Apple adopted Unix for macOS or why most newer computers nowadays run a Unix OS.
|
|
| |
I suggested nothing of the sort. I am criticizing your use of "still going strong" as proof of good design. And there are many possible explanations of why Apple adopted Unix. Inertia is certainly one of them! (If you want to write a new operating system, and all your programmers are familiar with unix, and you don't have enough money to start from scratch... you start from unix)
|
|
| |
Inertia among developers. Also, since when is Apple known for their software architectural decision-making, lol.
|
|
| |
Apple didn't make that decision; NeXT did.
|
|
| |
The fundamental design of the von Neumann architecture is even older, and Unix can be treated as just another layer of abstraction on top of it. It can't really be past its sell-by date.
|
|
| |
Is it more like listening to music made with a Stratocaster or a Les Paul? Lots of great ideas have been applied to guitars too, but lots of people still use the old ones.
|
|
| |
He said that because back then he was working on a new OS. This was PR.
|
|
| |
And why do you think he was working on a new OS, instead of improving Unix?
|
|
| |
He said that because back then he was working on a new OS. This was PR.
|
|
| |
This reminds me of the Stroustrup quote: "There are two types of languages, the ones everyone complains about, and the ones nobody uses."
|
|
| |
I think it's very telling that the author consistently refers to directories as "folders".

All of UNIX makes perfect sense if you are using UNIX for UNIX. If you're doing other things, like abstracting to "folders" and so on ... I am open-minded and can see where it starts to fall apart a bit. But I use UNIX for the sake of UNIX ... I am interested specifically in doing UNIX things. It works great for that.
|
|
| |
Asking not to make a point but because I genuinely don't have any clue: what is the distinction in your mind between a "directory" and a "folder", other than that the latter term is more widely used in the Windows community?
|
|
| |
What, then, are "UNIX things"? This could be either a no-true-Scotsman or a tautology. To resolve it, you'd need to specify what UNIX is good at.
|
|
| |
"This could be either a no-true-Scotsman, or a tautology."It's even worse! I am saying that working in terminals, with strings of text and non-binary-format config files ... and all of the tools built around that ... is an end in itself. Every single "broken" example in the OP is something that I find non-remarkable and, in fact, makes perfect sense to me.
|
|
| |
To argue for the OP, consider the case of passwd being parsed on every system call. That is simply sub-optimal. (It also seems exaggerated to me, and feels like a prime candidate for caching; see the sketch below.)

Further, there is immense value in GUI-based systems: discoverability. On a GUI, you can learn how to use a program without ever consulting a manual, just by inspecting your screen. This addition is what brought the computer to the masses.

Finally, the terminal model of UNIX is just horrible. The hacks-on-top-of-hacks that are needed to turn the equivalent of a line printer into something like ncurses or tmux are dreadful. The current terminal is like this purely because of legacy. If you'd design a system for "working in terminals, with strings of text and non-binary-format config files" from the bottom up, it would look totally different. Sadly, getting it to work with existing software would be a total nightmare.

All that being said, UNIX still has the better terminal (though I hear good things about PowerShell). Certainly, it is the best system for "working in terminals, with strings of text and non-binary-format config files". Though competition is sparse (Windows, and maybe Mac, depending on whether you consider it to still be Unix or not).
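For instance, here's a minimal sketch of the caching I have in mind (Python, purely illustrative -- this is not how ls is actually implemented):

    import os
    import pwd
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def username(uid):
        # Hit the user database at most once per distinct uid.
        try:
            return pwd.getpwuid(uid).pw_name
        except KeyError:
            return str(uid)  # unknown uid: fall back to the number, as ls does

    # A toy "ls -l" owner column: the cache means /etc/passwd is not
    # re-consulted for every single file in the directory.
    for entry in os.scandir("."):
        print(username(entry.stat().st_uid), entry.name)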
|
|
| |
Discoverability is definitely a feature of any decent post-2000 *nix terminal environment.

man and apropos get you a long way. The near-ubiquity of --help as a flag helps too. Well-managed program distributions will even tell your shell how to tab-complete what their programs expect. MOST CRUCIALLY, though, the text of a command line or config snippet can be pasted, emailed, IM'ed, blogged, and found with a search engine! Try describing how to navigate through a moderately complex GUI in text or by phone ... it's a disaster.
|
|
| |
> To argue for the OP, consider the case of passwd being parsed on every system call. That is simply sub-optimal.

passwd is not read on every system call, and anything that is read frequently is almost certainly in the fs cache. I got about 3 assertions into the article before I decided I'd had enough of that bullshit.
|
|
| |
Note that "read from disk" and "parsed" are two different things.
|
|
| |
This whole article is a bunch of strawman arguments. It points out historical mistakes by Unix developers, but those mistakes aren't inherent to, or a result of, the Unix Philosophy:
(1) Write programs that do one thing and do it well.
(2) Write programs to work together.
(3) Write programs to handle text streams, because that is a universal interface.
|
|
| |
The arguments in the article indeed don't have much to say about the Unix Philosophy per se - they're just a list of various fuckups and idiocies Unix accumulated for some reason or another. As for the Unix Philosophy, point (3) in your summary is something that's a) dumb, and b) already superseded by better solutions back when it was created.

Passing text streams around is a horrible idea, because now each program has to have its own half-assed shotgun parser and generator, and you have to glue programs together with your own, user-provided, half-assed shotgun parsers, i.e. calls to awk, sed, etc. Think of it this way: if, per the Unix Philosophy (points (1) and (2) of your summary), programs are kind of like function calls, and your OS is kind of like the running image, then (3) makes you program in a dynamic, completely untyped language which forces each function to accept and return a single parameter that's just a string blob. No other data structures allowed.

I kind of understand how people got used to it and don't see a problem anymore (Stockholm syndrome). What shocked me was learning that back before UNIX they already knew how to do it better, but UNIX just ignored it.
|
|
| |
> "The arguments in the article indeed don't have much to say about Unix Philosophy per se - they're just a list of various fuckups and idiocies Unix accumulated for some reasons or others."Right. The title should have been reflective of that "Various idiocies Unix has accumulated to this day" but since the article mentions Unix Philosophy, my point is that the article should have criticised the philosophy and not the practice. > "Passing text streams around is a horrible idea because now each program has to have its own, half-assed shotgun parser and generator, and you have to glue programs together with your own, user-provided, half-assed shotgun parsers, i.e. calls to awk, sed, etc." But this has actually proved to be very useful as it provided a standard medium of communication between programs that is both human readable and computer understandable. And ahead of its time since it automatically takes advance of multiprocessor systems, without having to rewrite the individual components to be multi-threaded. > "(3) makes you programming with a dynamic, completely untyped language which forces each function to accept and return a single parameter that's just a string blob. No other data structures allowed." That may be a performance downside in some cases, but the benefit of having a standard universally-agreeable input and output format is the time it saves Unix operators who can quickly pipe programs together. That saves more total human time than gained from potential performance benefits.
|
|
| |
> And it was ahead of its time

It wasn't ahead of its time. By the time Unix was created, people were already aware of the benefits of structured data.

> it automatically takes advantage of multiprocessor systems, without having to rewrite the individual components to be multi-threaded.

That's orthogonal to the issue. The simple solution to Unix's problems would be to put a standard parser for JSON/SEXP/whatever into libc or the OS libraries and have people use it for stdin/stdout communication. This can still take advantage of multiprocessor systems and whatnot, with the added benefit of program authors not having to each write their own buggy parser anymore.

> but the benefit of having a standard, universally-agreeable input and output format is the time it saves Unix operators, who can quickly pipe programs together. That saves more total human time than potential performance gains would.

I'd say it's exactly the opposite. Unstructured text is not a universally-agreeable format. In fact, it's non-agreeable, since anyone can output anything however they like (and they do), and as a user you're forced to transform data from one program into another via more ad-hoc parsers, usually written in the form of sed, awk or Perl invocations. You lose time doing that, each of those parsing steps introduces vulnerabilities, and the whole thing will eventually fall apart anyway because of the million things that can mess up the output of Unix commands, including your system distribution and your locale settings.

As an example of what I'm talking about, imagine that your "ls" invocation returned a list of named rows in some structured format, instead of an ASCII table. E.g.

    ((:columns :type :permissions :no-links :owner :group :size :modification-time :name)
     (:data
      (:directory 775 8 temporal temporal 4096 1488506415 ".git")
      (:file 664 1 temporal temporal 4 1488506415 ".gitignore")
      ...
      (:file 755 1 temporal temporal 69337136 1488506415 "hju")))

With such a format you could trivially issue commands like:

    ls | filter ':modification-time < 1 month ago' | cp --to '/home/otheruser/oldfiles/'
    find :name LIKE ".git%" | select (:name :permissions) | format-list > git_perms_audit.log

Hell, you could trivially display the usual Unix "ls -la" table for the user too, but you wouldn't have to parse it manually. BTW, this is exactly what PowerShell does (except it sends .NET objects around), which is why it's awesome.
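For a runnable flavour of this idea, here's a sketch in Python (the record fields and both program names are made up for illustration): two toy programs exchanging one JSON object per line over a pipe, so the downstream side gets typed fields instead of re-guessing column widths and date formats.

    # structured-ls.py -- emit one JSON record per directory entry
    import json
    import os

    for entry in os.scandir("."):
        st = entry.stat()
        print(json.dumps({"name": entry.name, "size": st.st_size,
                          "mtime": int(st.st_mtime), "dir": entry.is_dir()}))

    # filter-old.py -- keep files untouched for 30+ days; no awk/sed in sight
    import json
    import sys
    import time

    cutoff = time.time() - 30 * 24 * 3600
    for line in sys.stdin:
        rec = json.loads(line)  # one shared parser instead of ad-hoc splitting
        if not rec["dir"] and rec["mtime"] < cutoff:
            print(rec["name"])

Running "python3 structured-ls.py | python3 filter-old.py" then behaves like the hypothetical "ls | filter" above: neither side ever parses an ASCII table, and the locale can't break the pipeline.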
|
|
| |
> But this has actually proved to be very useful, as it provides a standard medium of communication between programs that is both human-readable and computer-understandable. And it was ahead of its time, since it automatically takes advantage of multiprocessor systems, without having to rewrite the individual components to be multi-threaded.

Except it is completely unusable for network applications, because the error-handling model is broken (exit status? stderr? signals? good luck figuring out which process errored out in a long pipe chain) and it is almost impossible to get the parsing, escaping, interpolation, and command-line arguments right. People very quickly discovered that CGI Perl with system/backticks was a very insecure and fragile way to write web applications, and moved to the AOLServer model of a single process that loads libraries.
|
|
| |
> (3) makes you program in a dynamic, completely untyped language which forces each function to accept and return a single parameter that's just a string blob. No other data structures allowed.

I think this is great. There are slightly more principled ways to do it, but having to convert everything to one single format at the end of the day keeps you humble. Let's go back to the previous decade's Hacker News: http://wiki.c2.com/?AlternateHardAndSoftLayers
|
|
| |
It's not converting to "one single format", it's converting to "any and all possible formats", because with unstructured text, you're literally throwing away the structure and semantics inherent in the data, instead relying on users to glue things together with ad-hoc parsers.
|
|
| |
Some of my primary beefs with Unix, stated more concisely than the rambling article:

1) Text is for humans, and is generally incomprehensible to machines. Encodings, arbitrary config file formats, terminals, etc., are all piles of thoughtless one-off hacks. It's a horrible substrate for composing software functionality, through either pipes or files.

2) Hierarchical directory structures quickly become insufficient.

3) The filesystem permission model is way too coarse-grained.

4) C is a systems/driver/low-level language, not a reasonable user-level application or utility language.
|
|
| |
1. Binary formats are great, until they break. I would take plain-string configuration over the Win95 registry any day. Actually, I'd take plain-text config over the Win10 registry any day too. How do you transfer your (arbitrary) program settings from one computer to another? I can tell you how on Unix (see the sketch below). How would you on Windows?

2. /-based filesystems are head and shoulders better than Windows'. Why should moving a directory from my internal hard drive to an SSD over USB change my location? On Unix I can keep my home directory (or Program Files) on an NFS share, an SMB share or an SSD hard drive. Can I do the same on Windows?

3. It is, which is why SELinux was invented. But that's too hard, so no one uses it.

4. All major OSs (both Unix/Linux and Windows) are C or C++ based. But here's something interesting: in the 90s, it was considered "advanced" to have the GUI be an inherent part of the OS, rather than just another program. Windows and Plan 9 did that. Yet it turned out that administering a system without a first-class command line is a pain, so Windows is rolling it back with PowerShell. Maybe Windows will one day make their Server headless, where you can do full sysadmining through SSH.
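(A rough sketch in Python of the Unix answer; the dotfile names are illustrative, since every program picks its own. The point is that settings are just files under $HOME, so moving them is archive-and-unpack:)

    import os
    import tarfile

    home = os.path.expanduser("~")
    with tarfile.open("settings.tar.gz", "w:gz") as tar:
        for name in (".bashrc", ".vimrc", ".config"):  # illustrative choices
            path = os.path.join(home, name)
            if os.path.exists(path):
                tar.add(path, arcname=name)  # directories are added recursively

    # Copy settings.tar.gz to the new machine and unpack it into $HOME.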
|
|
| |
> Actually, I'd take plain-text config over the Win10 registry any day too. How do you transfer your (arbitrary) program settings from one computer to another? I can tell you how on Unix. How would you on Windows?

Your personal registry settings are stored in %USERPROFILE%\ntuser.dat, and the global registry trees are various files under %SystemRoot%\System32\Config.

Actually, in many ways Windows' organization makes it much easier to find profile data to copy than Unix-based systems do. Most programs install configurations either in their install folder (C:\Program Files\<Product Name>) or in per-user data (C:\Users\<user>\Application Data\<Product Name>). On Unix, you might have a mix of dot folders (with some things choosing .<name> while others settle for .config/<name>), as well as having to guess whether various config files are under /etc/<product>, /usr/lib/<product>, /usr/share/<product>, or maybe even weird places like somewhere under /var or /opt.

> On Unix I can keep my home directory (or Program Files) on an NFS share, an SMB share or an SSD hard drive. Can I do the same on Windows?

Yep, you can remap where your user's home profile is. You can even distinguish between files that should be replicated across network shares and files that should stay in per-user storage (AppData\Roaming versus AppData\Local).
|
|
| |
To 1: Computers are for humans. I like my configurations to be accessible to me, and my logs as well.
|
|
| |
Who gets to decide what "accessible" means? Too many Unix programs have their own completely ad-hoc printer and parser for config files, each with different assumptions. Some allow comma-separated entries in values. In some, spaces or tabs are significant. Varying comment characters. Varying support for multi-line entries. Varying support for quoted strings. Varying support for Unicode. Some are key/value, some are section-based. Arbitrary special punctuation abounds.

It's a fundamental, unstandardized mess. From the perspective of a single program, fine: it does what it needs to, its individual text format is human-understandable, and its code is quickly banged together. From an ecosystem perspective, it's a complete failure of usability and _interoperability_, which is the entire point of the Unix philosophy in the first place! Humans aren't the only ones reading and writing config files: software needs to do it too, be it applications or automation scripts. Again, text is a horrible substrate for composing software functionality.
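To illustrate (a Python sketch; the [server] section is a made-up example), here is what one agreed-upon format buys: a single stdlib parser, one comment syntax, one quoting rule, and zero parsing code per program.

    import configparser
    import textwrap

    cfg = configparser.ConfigParser()
    cfg.read_string(textwrap.dedent("""\
        [server]
        # one agreed syntax for sections, keys, and comments
        host = 127.0.0.1
        port = 8080
    """))

    print(cfg.get("server", "host"))     # "127.0.0.1"
    print(cfg.getint("server", "port"))  # 8080, already an int

Any application or automation script that standardizes on such a format can read and write any other program's config without shipping its own parser.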
|
|
| |
Easy: a tool comes with it to do that, or it's a standard data format you view with an existing tool. You even get closer to the UNIX philosophy of one app per task that way, given the mess ad-hoc text leads to.
|
|
| |
This is just beautiful: "Standard utilities provide the output in the form of a plain text. For each utility, we actually need a parser of its own."

(More people would probably have read the Unix Haters Handbook if it was as pithy as this.)
|
|
| |
I wish Symbolics had had a better run.

The Windows registry/PowerShell approach has the niceness of passing pieces of data instead of one big blob of text that has to be re-parsed at every step, but with the drawback of verbosity and fussy static typing. Being able to directly pass s-expressions between programs, without the format/parse/typing hokey-pokey of Unix and Windows, would be nice.
|
|
| |
> Being able to directly pass s-expressions between programs without the format/parse/typing hokey-pokey of Unix and Windows would be nice.

Genera did not have programs; it was all just function calls. There was a multi-user LMI Lisp machine that ran Unix concurrently, which used streams to communicate between the two Lisp processors and the one Unix processor: http://www.1000bit.it/ad/bro/lmi/LISP-MachinePR_Photo.pdf

Formats for inter-process communication are hard and far from a solved problem. Just look at what has come out in the past 10 years: Thrift, Protocol Buffers, Cap'n Proto, FlatBuffers, SBE, and probably twice as many others I haven't heard of.
|
|
| |
Symbolics Genera didn't pass S-expressions (i.e. text) between processes; it passed objects. Everything runs in a single address space, so this is cheap and accurate.

Also, ZetaLisp and Common Lisp have way more types than those supported by S-expressions. For example, they have real vectors/arrays, structures, and full objects. Don't assume all of the Lisp world uses just the subset of Scheme found in the first couple chapters of SICP.
|
|
| |
Only knowing Genera from a user perspective, I just assumed that process-to-process communication gives preference to the internal representation when it is available.
|
|
| |
The main advantage is of course that the system provides APIs for using the registry, and all programs use them. But another is that the format does not provide for comments. This means it is easy to manipulate automatically, unlike the Unix text files.
|
|
| |
S-expressions are very nice, and have the unequalled property of being simple enough to serve as the syntax of an entire programming language, but as a data format they are at best on the level of JSON. If one were to design a new data format for pipes, I would hope one would aim a little higher than that. Going from plain text lines to JSON or sexps would simply not be worth the infrastructure retooling.
|
|
| |
What lies above JSON or S-expressions - XML? Or some binary format?

If I had to guess, I'd say the issue with JSON and the like is how to deduce types (and the limited types available), combined with the issue of special characters in names and strings. XML goes a long way towards fixing that, but at the cost of a lot of extra bloat. A binary format with a nicely defined header might work, but those formats tend not to be so good about inserting stuff.

There is something to be said in favor of plain text. If all solutions suck, go for the simplest and most flexible one.
|
|
| |
Being able to use the same syntax for code and data is no small thing. This aspect alone puts it above JSON. That, and for most Lisps, S-expressions have a broader vocabulary than JSON (symbols, complex numbers, etc.).
|
|
| |
And latency, if you're trying to do anything with PS objects over a slow link. (Try doing a Get-ChildItem on an SMB share, or loading the registry on a host behind a slow connection; I'll wait.)
|
|
| |
You're free to take it all and turn it into an OS of your vision.

Next thing you'll bitch about is how Earth works, where every region, country and even, gasp, city is different because a few people a long time ago decided "this is the way to go".
|
|
| |
Similarly, our appendices, which are useless to begin with, occasionally decide to become infected, swell, and possibly burst, threatening our lives, just because biology a long time ago decided "this is the way to go."
|
|
| |
This is incorrect. People did indeed think appendices were just useless throwbacks for a long time, but more recently the medical community has recognized their usefulness. They're basically like a first-level bootloader for the GI system, used to store gut bacteria in case of a severe illness like cholera or dysentery. The appendix also serves some other immune functions. See here:
https://en.wikipedia.org/wiki/Appendix_%28anatomy%29#Functio...
|
|
| |
That common opinion is odd, in hindsight.

Natural selection ought to quickly do away with an organ that served no purpose but to occasionally kill one of the luckless organisms that possessed it. It should have been obvious that it was doing something else, or more precisely, that the gene(s) for "having an appendix" were.
|
|
| |
I thought it was weird that the appendix-haters didn't see the connection between appendix removal and increased stomach issues (especially diarrhea). All kinds of people I met who'd had the operation had issues after the "useless" organ was removed.
|
|
| |
> Unix Philosophy

Note that this philosophy covers many concepts. These discussions often mention the modularity and composition rules, but the other parts of the philosophy are also important. See "The Art Of Unix Programming" for a full explanation of the philosophy: http://www.catb.org/esr/writings/taoup/html/ch01s06.html
|
|
| |
Unix as we know it is almost 50 years of accretion - sh/bash is a great example of this. I think the Unix philosophy is still sound and alive, but the march of technology means not everything that was universal before is universal now.
|
|
| |
There's a recurring theme (e.g. [1], among many examples) of comparing the Unix Way to the way of functional programming. Both prefer small things that do one thing and compose well.

What is missing in many cases is a concepts guide, explaining the key ideas, how to combine things, and what's possible in various subject areas. For GUI programs, menus and toolbars used to be the concepts guide: what they show is what's possible, and they offer context help. This is why a GUI feels friendly. It sucks at composability, though. Current mobile interfaces, unfortunately, tend to lack this.

If tiny GUI-oriented programs were easy to compose, had an easy way to save the composed state, and a number of daily-use programs bundled with an OS came in this form, providing example and reference, many people would consider following suit, I suppose.

[1]: http://softwareengineering.stackexchange.com/questions/61814...
|
|
| |
> For GUI programs, menus / toolbars used to be the concept guide

This simple fact seems like the key to getting the masses into computing. For something like 6 years (say, ages 12-18) GUIs were the way I interacted with computers. Need to do something and learn about it? Go and explore the UI until you find the option. If the option has a shortcut printed next to it, you will remember it eventually.

Sadly, GUI design is quite a separate discipline from software design. This means much open-source software is missing GUIs; those who write the software aren't always GUI designers. This also creates the mismatch between composing software and composing GUIs: as they are different disciplines, combining them means different things. A decent stopgap is massive frameworking and standardization of GUIs, to make it easier for devs to get a GUI.

To get the really good stuff, commercial entities are best positioned. They need their stuff to be usable by everyone, and this finances the hiring of GUI people. There is the rare gem of a developer who can also do GUI right, but that only helps in the case of small projects. When projects grow, unless all devs have the GUI knack, you're going to need some dedicated GUI people. It would be great if we could get more GUI-oriented people into open-source stuff, but it seems like they aren't as attracted to open source as devs are. It might be because devs can be at the ground floor of a project, while GUI work, almost by necessity, comes later.
|
|
| |
> concepts guide

"The Art of Unix Programming": http://www.catb.org/esr/writings/taoup/html/

It's not perfect - I'd love to see a guide with more practical examples - but it does a good job covering the basic philosophy and some of the history of why certain design decisions were made.
|
|
| |
Have you tried the alternatives? They all suffer from various problems. I use fish, for example, but still write bash scripts for production systems; if anything, fish is perhaps too safe, staying so close to POSIX that the improvements don't make a great difference (versus the pains of breaking compatibility). PowerShell? You might as well write Python or Ruby at that rate.

What I mean to say is that while there has been a lot of inertia from the previous technologies that makes them hard to change, it's also true that the new technologies have failed to make large enough gains to warrant the pains of changing. This is precisely the demise of Plan 9: it's not that it wasn't better, it's that it wasn't better enough to warrant the huge expense of replacing old, working systems.
|
|
|
| |
"This article was written hastily, and I don’t want to further improve it. You’re lucky I wrote it. Therefore, I may provide some facts without source links."I guess I am lucky he wrote this for a pleb like me.
|
|
|
| |
"Taking into account the numerous mistakes of UNIX. However, no one raises Plan 9 on a pedestal."Suckless and cat-v.org would disagree. I'd also disagree since I'm a huge fan of plan9port.
|
|
| |
The Plan 9 network file system protocol, 9P, is extremely well regarded and is the basis for a number of security projects.
|
|
| |
He lost me right at the beginning, with "I guess there wasn’t even Microsoft DOS at the time (I guess and I don’t bother to check, so check it yourself)."

This is the OS equivalent of a teenage Call of Duty player on Xbox Live.
|
|
| |
Am I the only one thinking that this article is full of errors? Starting with "killing zombie process" (sic!) and so on?
|
|
| |
I do think we lost the "do one thing and do it well" idea: https://en.m.wikipedia.org/wiki/Unix_philosophy

Part of it is that it's way more annoying to maintain a little tool than to make it part of a bigger project with a much bigger community; a little tool is also far harder to discover than something that's just in the docs of a bigger project.
|
|
| |
Although I agree that Unix is a big collection of hacks and well past its prime, the author displays several fundamental misconceptions about what he's talking about. Here are a few examples:

> Dirty hacks in UNIX started to arise when UNIX was released, and it was long before Windows came to the scene, I guess there wasn’t even Microsoft DOS at the time (I guess and I don’t bother to check, so check it yourself).

At least he acknowledges that he's being incredibly lazy, and he shows the glimmer of an understanding as to why some of the things mentioned later happened: because Unix is from the early 70s, which were a very different time in computing and culture.

> Almost at the very beginning, there was no /usr folder in UNIX. All binaries were located in /bin and /sbin. /usr was the place for user home directories (there was no /home).

Putting /home on a separate partition remains a pretty common thing to this day because users tend to have greater storage requirements than the root. /usr/bin and the like are the result of people realizing that this secondary, larger disk is an acceptable place to put binaries and other files that aren't needed at bootup.

> In other words, if you’ve captured Ctrl+C from the user’s input, then the operating system, instead of just calling your handler, will interrupt the syscall that was running before and return EINTR error code from the kernel.

That's not the kind of interrupt they're talking about.

> I’ve read somewhere that the cp command is called cp not because of copy but because UNIX was developed with the use of terminals that output characters very slowly.

Yep, terminals that print on paper are pretty slow, as are 300 baud modems. I'm absolutely crushed I had to learn that 'cp' means 'copy' - it took hours to beat that into my head, and the thousands of keystrokes I've saved over the years are a small comfort (except to my RSI-crippled hands).

> The names of UNIX utilities is another story. For example, grep comes from command g/re/p in the ed text editor. Well, cat comes from concatenation. I hope you already knew it. To top it all up, vmlinuz — gZipped LINUx with Virtual Memory support.

'cat' comes from 'catenate', in fact. And what would you name 'grep' instead - "searchregexandprint"?

> at least the main website of C that would be the main entry point for all beginners and would contain not only documentation but also a brief manual on installing C tools on any platform, as well as a manual on creating a simple project in C, and would also contain a user-friendly list of C packages

This is one of the most ridiculous ones. You're talking about a programming language defined in the 70s, for Christ's sake. Lots of websites created in the 70s? There is a document with a good introduction to C, project examples, etc., and it's called The C Programming Language, a book by K&R. When Go was created a few years ago (by Ken Thompson, Rob Pike, and Robert Griesemer), yeah, it got a website - golang.org - and it's one of the best project sites I've seen.

The article points out some legit problems in Unix, but even leaving aside the author's ESL challenges, it's poorly written, poorly thought out, and poorly defended.
|
|
| |
Totally agree. The article doesn't show deep understanding of the actual problems.

- make's TAB "problem" as the very first argument is not very convincing.

- The citations meant to back up the claim that a (possibly binary) registry database would be better than small text files just don't back it up. There is nothing there to defend a registry; the quote is about fsync semantics, which has nothing to do with a registry. Btw, in my perception it's a widely accepted fact that a big-pile-of-crap database is a bad idea. And oh, I haven't ever heard of any problem with passwd/group/shadow/gshadow being text files. And if there were, the access method is actually abstracted away; it's easy to switch backends to something else (NSS). (There is a problem with these files, though - they are denormalized, not all fields have a clear meaning, and some programs interpret some fields in weird ways.)

- Zombie processes. What's the problem there? They are just like file handles: handles have to be closed before they are garbage collected. The actual problem is that you can't really "open" and "close" processes, only spawn new children, and the resulting hierarchy is not typically desired.

- "We call touch in the loop! This means there is a new process for each file. This is extremely inefficient." And why exactly is the shell to blame that you use touch in a loop? (Apart from the fact that it's almost certainly not a problem.)

Could go on but have to leave...
|
|
| |
> What would you name 'grep' instead?

Yeah, I found that one an especially weird gripe. Grepping was a new thing, so we needed a word for it. 'Grep' is short, easy to say and type, and relatively hard to confuse with similar words in the domain. Works for me.

I can unfortunately imagine a modern startup implementing it, and shudder at the potential names my imagination is coming up with... Searchlr, the best way to search text! ReadMonkey, your personal pattern recognizer! I'll stop now.
|
|
| |
Couldn't they just have called it google??
|
|
|
| |
So how do you know what type of search you're doing? There were existing text-matching algorithms at the time.

"No, I mean the kind of car that is really big, that you drive on ice rinks to smooth out the ice."
|
|
| |
Search what? File names? File contents? Users? Machines?
|
|
| |
You could say the same about "mv". Move what? Files? File names? File parts? Users? Machines? Screens?

There's always some default subject implied for every command name. For "find" it is files; for "search" it could have been text.
|
|
| |
> You could say the same about "mv". Move what? Files? File names? File parts? Users? Machines? Screens?

Files - the base type that's consistent across all the basic commands (AFAIK).
|
|
| |
It depends on what you mean by "basic command". There are plenty which take arguments that aren't files - chown, chgrp and su, for example.
|
|
| |
> Almost at the very beginning, there was no /usr folder in UNIX. All binaries were located in /bin and /sbin. /usr was the place for user home directories (there was no /home).

> Putting /home on a separate partition remains a pretty common thing to this day because users tend to have greater storage requirements than the root. /usr/bin and the like are the result of people realizing that this secondary, larger disk is an acceptable place to put binaries and other files that aren't needed at bootup.

The author cites this post by Rob Landley: http://lists.busybox.net/pipermail/busybox/2010-December/074...

While I can't independently confirm Rob's claims, and he doesn't provide any citations, I do find them very believable - /usr was invented at Bell Labs because they were running out of space on their puny 1970s hard disks. (And an RK05 was small even by 1970s standards - the IBM 2314 mainframe hard disk, released in 1965, had a 30MB capacity; the IBM 3330, released in 1970, stored 100MB. Of course, these disks would have cost a heck of a lot more than an RK05, and were likely not feasible for the UNIX team given their budget.) If they had had bigger disks (or the facility to make multiple smaller disks appear like one big disk), it is less likely they would have split the operating system itself across two disks (/ and /usr). (Using separate disks for OS vs user data would have remained likely even with bigger disks, since that was common practice on systems at the time.)

(Some other operating systems from the same time period already had some ability to make multiple disks appear like one big disk. For example, OS/360 has the concept of a "catalog", a directory mapping file names to disk volume names; this means you can move individual files between disks without changing the names by which users access them. In their quest for simplicity, Thompson and Ritchie and co decided to omit such a feature from UNIX.)
|
|
| |
If you look at Plan 9, you'll see /usr is once again the location of user home directories, and binaries go in /bin. In fact, /bin is a 'union' directory composed of /386/bin, /rc/bin, /usr/jff/bin, and any other places you've decided to put binaries.
|
|
| |
> because Unix is from the early 70s, which were a very different time in computing and culture.

Lisp, Smalltalk, Mesa, the Pilot OS and the Xerox Development Environment are from the 70s as well (Lisp much earlier).
|
|
| |
For an interesting critique of Unix, the "Ghosts of Unix Past" series of articles on LWN is pretty good: https://lwn.net/Articles/411845/

The whole series is worth a read, especially part 3, "unfixable designs", which talks about signals and the Unix permission model.
|
|
| |
Evolution is never clean.
|
|
| |
Evolution does what evolution does. The problem is that selection pressure is too weak, and things that should die off survive and flourish.
|
|
| |
Isn't that contradictory? If you start with the assumption that it's evolutionary, how does it make sense to judge whether selection pressure is "too weak" and that things "should die off?" Selection pressure is what it is.
|
|
| |
Selection pressure may or may not be in a feedback loop with the evolutionary process, but you can still view it as a separate component. In case of computing, the (broadly understood) market is the selection pressure. As for the notion of what should happen, this comes from humans who are capable of thinking about the evolutionary process and who value some goals over others. In particular, those humans tend to notice that the selection pressures in software industry do not promote good, efficient, and well thought-out solutions.
|
|
| |
I've always wondered why a new open-source OS has not arisen to claim the mantle of UNIX / Linux. There are so many ancient design decisions, piled up upon each other, which spawn millions of man-hours of frustration. It continues perhaps because of its esoteric nature. People don't want to throw away all the weird stuff they've learnt.
|
|
| |
And also because backwards compatibility matters once something is deployed on as large a scale as Unix was.

(Windows also has a similar problem.)
|
|
| |
Maybe a parallel system needs to be implemented, like how OS X originally had a Classic OS 9 system running alongside. Once everyone had come to OS X, the classic environment was dropped.
|
|
| |
> I guess there wasn’t even Microsoft DOS at the time (I guess and I don’t bother to check, so check it yourself)

1969 < 1981.
|
|
| |
Heck, in 1969 there wasn't even CP/M, which MS-DOS was modelled after.

Heck, in 1969 there wasn't even RT-11, which CP/M was modelled after. There was a brand new OS/8, which RT-11 was modelled after.
|
|
| |
For those of you who might be fooled into thinking this is about things like systemd and Docker - it's not!
|
|
| |
More a reprise of the _Unix Hater's Handbook_ with a slightly different gloss, for those who remember that.
|
|
|
| |
That was also my first thought.
|
|
| |
A lot of the author's criticisms are about UNIX utilities not gracefully handling filenames containing special characters. But seriously, who puts a newline in a filename? Instead of wasting time writing scripts that meticulously handle all possible edge cases, I'd much rather fix whatever broken process is putting control characters into file names.
|
|
| |
"Utilities" should not have to gracefully handle troublesome filenames, each one re-inventing the wheel.If the system as designed accepts almost literally any characters in pathnames, it doesn't make a lot of sense to complain about people using "naughty" (troublesome) characters in pathnames they create. It would have made a heck of a lot more sense to have disallowed stupid characters (all white space and control characters at a minimum) from the outset. However, changing it at this point is pretty impractical. Of all the criticisms of UNIX, the singular one that strikes me as valid is the way that it accepts unquestioningly insane characters in pathnames. I would also maintain that case sensitivity in pathnames and a lot of other places is also a human interface error. It is arguable, and a matter of taste, but I find it obtuse. You should be able to "say" pathnames. The necessity for circumlocutions like "Big-M makefile" is offensive. I'm one of the biggest fans there is of UNIX and C and the Bourne shell and derivatives, but recognition of weaknesses and flaws in those you love is not weakness; it is wisdom.
|
|
| |
> Of all the criticisms of UNIX, the singular one that strikes me as valid is the way that it accepts unquestioningly insane characters in pathnames.In fairness, though, allowing non-ASCII characters is what enabled the (mostly seamless) transition to UTF-8 filenames.
|
|
| |
"Doctor, it hurts when I do this.""So don't do it." The problem with filenames is just a symptom of the biggest problem of UNIX conventions - passing around unstructured text. Filenames should have one well-defined format (AFAIR kernel allows pretty much anything but the NULL character). That's it. For most applications, filenames should be opaque data blobs compared for binary equality. But because we're passing around unstructured text, each program has to parse, reparse, and concatenate strings via ad-hoc, half-assed shotgun parsers. Each program does it slightly differently, hence the mess.
|
|
| |
When the design was made, no one was considering pathology. We were all too invested in the wonder of making it all work to worry about people screwing around in crazy ways, let alone purposeful attacks.

As long as everyone recognized that putting certain characters in pathnames was counter-productive, things worked fine. Nobody ever dreamed of putting a space character in a filename when they all came from a CLI background. When barbarians came from Mac/Windows to UNIX, they lacked this background, and there went the neighborhood. I remember being stunned when I first encountered two periods in one filename! But it only took me a few seconds to grasp that period was just another character, "suffixes" were just conventions, and it all made perfect sense. OTOH, the first time somebody showed me a file named "-fr\ \*" and suggested I delete it, I got one of my first disillusionments.

P.S. "/" is a character which is also "special" in pathnames. And the particular FS may make other exceptions which are not globally enforced by the kernel; ZFS has several (such as "@").
|
|
| |
> When the design was made, no one was considering pathology. We were all too invested in the wonder of making it all work to worry about people screwing around in crazy ways, let alone purposeful attacks.

Shortly after timeshare systems were created, people were thinking about security.
|
|
| |
> When the design was made, no one was considering pathology. We were all too invested in the wonder of making it all work to worry about people screwing around in crazy ways, let alone purposeful attacks.

I could buy this if not for the fact that back when UNIX was created, there were already better operating systems with sane solutions to these issues. It's more likely that those aspects simply weren't thought through, but instead just hacked together.

Contrary to what seems to be a popular opinion nowadays, UNIX wasn't the first real operating system, just like C wasn't the first high-level programming language. I know I actually believed the latter, due to the way many C/C++ books were written. But no, in both the world of programming languages and the world of operating systems, there already were better-thought-out solutions. It's a quirk of history that UNIX and C ended up winning.
|
|
| |
"But no, in both the worlds of programming and operating systems, there already were better thought-out solutions."There were differently thought-out solutions, but not necessarily better or worse. Perhaps the issues they thought out didn't matter as much at the time and very unlikely to matter even a little bit today, and who knows how much they got wrong and much worse. It's very hard to speculate. But one thing I'm sure of is that these things cannot be really well designed from scratch and all the problems manifest only once they are used by people. So widely used systems can only be compared to widely used systems, not something niche or unused.
|
|
| |
Sure, but we're not talking about niche systems. There was a whole flourishing world of computing before Unix and C came to be. In fact, a lot of significant theoretical and practical advancements came from that age.

Our industry does seem to be stuck going in circles, continuously forgetting the ideas of past cycles and reinventing them, only for them to be forgotten again. To see that phenomenon in action, one does not have to look much further than the last 10-15 years of JavaScript history, in which the web ecosystem slowly reinvented long-established practices from desktop operating systems and GUI toolkits...
|
|
| |
Of note for folks who don't know it: Common Lisp treats pathnames as structured objects. It's pretty nice, although it'd have been even nicer had they more fully specified them with respect to e.g. POSIX filename conventions (e.g. is /foo/bar a directory or a file?). UIOP basically fixes all this, and it works.
|
|
| |
Yeah. CL's pathnames are a bit confusing to people, though, because they came about as a way to handle several different types of file paths from the various operating systems which preceded UNIX / POSIX.

For those who're interested, there's a pretty good overview of them in Practical Common Lisp[0]. I'll quote the first paragraph of the subchapter "How Pathnames Represent Filenames", which serves as a decent TL;DR:

"A pathname is a structured object that represents a filename using six components: host, device, directory, name, type, and version. Most of these components take on atomic values, usually strings; only the directory component is further structured, containing a list of directory names (as strings) prefaced with the keyword :absolute or :relative. However, not all pathname components are needed on all platforms--this is one of the reasons pathnames strike many new Lispers as gratuitously complex. On the other hand, you don't really need to worry about which components may or may not be used to represent names on a particular file system unless you need to create a new pathname object from scratch, which you'll almost never need to do. Instead, you'll usually get hold of pathname objects either by letting the implementation parse a file system-specific namestring into a pathname object or by creating a new pathname that takes most of its components from an existing pathname."

[0] - http://www.gigamonkeys.com/book/files-and-file-io.html
|
|
| |
This demands the question - why allow control characters in file names at all? Non-printing characters?
|
|
| |
There simply is no good reason. I have had this discussion, and there is a group of people who consider it "unclean" to rule out any characters.

There is an excellent discussion of the topic[0]. I find it utterly definitive in the way it relentlessly shows that you can't completely "fix" this issue any way other than by having the kernel disallow the bad characters.

[0] https://www.dwheeler.com/essays/fixing-unix-linux-filenames....
|
|
| |
Ruling out "bad" characters is bound to affect internationalization negatively.IMO the best approach would be to separate between file name and the file object. When I edit a file with vim, should vim really need to know the name of the file? No. Likewise for a lot of other utilities as well. If instead of being so focused on file names and paths everywhere and we operated instead mainly on inodes then I think much would have been won. Now in some instances the file name is of interest to the program itself, for example if you attach a file to an e-mail, upload it with a web browser, tar a directory, etc. but in all of these instances I think that the file name should be more separate and even for most programs that want the file name they should just treat the file name as a collection of bytes that have close to no meaning. In other words, I would want to translate paths and file names into inodes in just a very select few places and then keep them separate. This is what I am going to do in my Unix-derived operating system. I will get around to implementing said system probably never but you know, one can dream.
|
|
| |
"Bad" characters in this context is control characters. So no, it would not affect internationalization at all.
|
|
| |
Where everything is a file handle, except when it is not (sockets, IPC, ...).
|
|
| |
Note: A “file handle” is a FILE *, i.e. a stream as used by a lot of the higher-level functions of the C library. The term you were looking for is probably “file descriptor”. (Or possibly “inode”, “directory entry”, or “file name”? It’s not entirely clear what you mean.)
|
|
| |
I read once that the dd command (which stands for "convert and copy") was not named cc because the compiler was already called that, so they used the next letters in the alphabet.

    NAME
        dd - convert and copy a file
|
|
| |
No, it came from the DD (data definition) statement in OS/360 JCL, which is why it has a different syntax from the other utils.
|
|
| |
> You’re lucky I wrote it.

I read this and was like "screw that, I'm not going to read it," and then was like, well, let's see, and after reading, I wish I had stuck with my initial gut instinct to close that tab. smh.
|
|
| |
I'll wait for the satirical summary of this HN thread. :-)
|
|
| |
Most of the author's criticisms around the Unix Philosophy™ (aside from perhaps the performance aspect) would be solvable in two steps:

1) Standardize on some structured text serialization format (I like YAML for this)

2) Write a new shell

Both of these things are compatible with the Unix Philosophy™, and thus said Philosophy is nowhere near collapse. Rusty around the edges, sure, and maybe with some asbestos in the ceiling tiles, but certainly renovatable.

The philosophy is already prevalent in the world of "microservices": an application is split into a whole bunch of independent (usually containerized) programs communicating via something like JSON over HTTP.
|
|
| |
2) I'm writing a new shell! http://www.oilshell.org/blog/

1) This is an appealing idea, but my claim is that there's no single serialization format that will work. (Or if there is one, it has yet to be invented.) More detail here: https://www.reddit.com/r/oilshell/comments/5x5rgg/pipes_in_p...

There's nothing stopping anyone from using structured data over pipes, but I think it's a mistake to assume there will be or needs to be a "standard".

3) I agree that JSON over HTTP is very much in the vein of Unix. The REST architecture has a very large overlap with the Unix philosophy - in particular, everything is a hierarchical namespace, and you have a limited number of verbs (GET / POST vs. read() / write()).
|
|
| |
One format for all use cases? Databases (passwd, group, ...), single-word files, key-value(-list?) files, rc files for a thousand programs?

Great idea! We should use XML for that...
|
|
| |
XML, YAML, JSON and s-expressions are all just flavours of representing trees.

So yeah, any of them would be a much better idea than unstructured text, and yes, you can serialize all those use cases into trees. I'd steer away from XML for the sake of efficiency and human readability, though.
|
|
| |
I meant that more in terms of IPC (e.g. in pipes), but most of those use cases happen to be adequately handleable by YAML specifically, so yeah, why not?
|
|
| |
Ideally a simplified subset of YAML; the full specification suffers from feature creep and has actual bugs.
|
|
| |
It does not fix the thing right below the shell: the terminal.

The terminal is responsible for hacks upon hacks: colors, ncurses, signals, and whatnot.
|
|
| |
> Some people think that UNIX is great and perfect

"...great and perfect" is a strawman. Whether "some people" think that is irrelevant. Some of this article is interesting, but the fact of the matter is that 40-year-old systems show signs of being 40 years old. If "fixing" everything were easy, it'd be done. Tabs in Makefiles throw off the uninitiated for 10 minutes; then they learn, shrug, and move on. These scars and stories are part of the package.

Reading further, some of this is just incorrect...

> That’s not to mention the fact that critical UNIX files (such as /etc/passwd) that are read upon every (!) call, say, ls -l, are plain text files. The system reads and parses these files again and again, after every single call!

Not on my system.

> It would be much better to use a binary format. Or a database.

On my system, it is (running "ls -ld ."):

    kamloops$ uname -a
    NetBSD kamloops 7.99.64 NetBSD 7.99.64 (GENERIC) #26: Thu Mar 2 07:15:26 PST 2017 root@kamloops:/usr/src/sys/arch/amd64/compile/obj/GENERIC amd64
    kamloops# dtrace -x nolibs -n ':syscall::open:entry /execname == "ls" / { printf("%s -%s", execname, copyinstr(arg0));}'
    dtrace: description ':syscall::open:entry ' matched 1 probe
    CPU ID FUNCTION:NAME
    0 14 open:entry ls -/etc/ld.so.conf
    0 14 open:entry ls -/lib/libutil.so.7
    0 14 open:entry ls -/lib/libc.so.12
    0 14 open:entry ls -.
    0 14 open:entry ls -/etc/nsswitch.conf
    0 14 open:entry ls -/lib/nss_compat.so.0
    0 14 open:entry ls -/usr/lib/nss_compat.so.0
    0 14 open:entry ls -/lib/nss_nis.so.0
    0 14 open:entry ls -/usr/lib/nss_nis.so.0
    0 14 open:entry ls -/lib/nss_files.so.0
    0 14 open:entry ls -/usr/lib/nss_files.so.0
    0 14 open:entry ls -/lib/nss_dns.so.0
    0 14 open:entry ls -/usr/lib/nss_dns.so.0
    0 14 open:entry ls -/etc/pwd.db
    0 14 open:entry ls -/etc/group
    0 14 open:entry ls -/etc/localtime
    0 14 open:entry ls -/usr/share/zoneinfo/posixrules
    kamloops# file /etc/pwd.db
    /etc/pwd.db: Berkeley DB 1.85 (Hash, version 2, native byte-order)

Now, I see that /etc/group -is- a plain file. This could get the same treatment as /etc/passwd if it becomes a burden. In the meantime, if it's a performance bottleneck, make a memoizing function to look up groups and use the '-n' switch to ls.

The article is probably most valuable as a record of the author thinking deeply about Unix, which is part of a user's development... All the bluster (some of which is interesting), and then at the end he walks it back:

> So, I do not want to say that UNIX – is a bad system. I’m just drawing your attention to the fact that it has tons of drawbacks, just like other systems do. I also do not cancel the “UNIX philosophy”, just trying to say that it’s not an absolute.

Shame about the title... but maybe that's what landed it here on HN (?)

Edit: explained the "ls" command actually run.
|
|
| |
"This could get the same treatment as /etc/passwd if it becomes a burden. In the meantime, if it's a performance bottleneck, make a memoizing function to lookup groups and use a '-n' switch to ls."Exactly, and this is the sort of thing that can be done with open source software. It may not even be a lot of code depending on how it is approached.
|
|
| |
It's not even clear that, for small password files, scanning /etc/passwd is any slower than a database. The file is likely already in memory, and a full scan of a few kilobytes of text in highly optimized C is likely to take only microseconds.
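Easy enough to sanity-check. A crude benchmark (it loops inside one awk process so fork/exec costs don't dominate; exact numbers will obviously vary by machine):

    # read /etc/passwd 10,000 times in a single process
    time awk 'BEGIN {
        for (i = 0; i < 10000; i++) {
            while ((getline line < "/etc/passwd") > 0) n++
            close("/etc/passwd")
        }
    }'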
|
|
| |
Regarding 'ls': I ran "strace ls" on Linux and didn't see any open on /etc/passwd. The important part is to run 'ls -l', which should trigger the /etc/passwd access because it has to map UIDs to usernames.
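For anyone who wants to reproduce the difference (on Linux; newer glibc goes through openat, hence tracing both names):

    # plain ls: no passwd lookup
    strace -e trace=open,openat ls 2>&1 >/dev/null | grep passwd

    # ls -l has to turn uids/gids into names, so the name-service files show up
    strace -e trace=open,openat ls -l 2>&1 >/dev/null | grep -E 'passwd|nsswitch|group'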
|
|
| |
re: "ls -l" understood. I ran "ls -ld ."
|
|
| |
Ah, I'm not familiar with dtrace so I guess I missed that in your comment. Thanks for the clarification.
|
|
| |
Understandable. The dtrace command you see is tracing a process "ls" (regardless of switches). I didn't show my work (ls -ld .) that generated the output, so your confusion isn't unfounded. :)
|
|
| |
TLDR - things have names, isn't that terrible?
|
|
| |
"You’re lucky I wrote it."No Mr. Askar Safin. YOU are lucky that I am reading it.
|
|
| |
This is an absolute garbage-tier article. What on earth is it doing on the HN front page? The author has no clue what he's talking about (and admits it several times) and makes logical leaps all over the place (at one point the author says GNOME should have used a registry, therefore UNIX configs are bad?).
|
|
| |
I prefer the original, older, funnier version: http://harmful.cat-v.org/software/operating-systems/linux/

I suspect the author of this site has contributed to the Linux Haters Handbook. Some quotes:

“Linux printing was designed and implemented by people working to preserve the rainforest by making it utterly impossible to consume paper.” – Athas

“ALSA is like the emperor’s new clothes. It never works, but people say it’s because you’re a noob.”

“Object-oriented programming is an exceptionally bad idea which could only have originated in California.” – Edsger Dijkstra

“[firefox] is always doing something, even if it’s just calculating the opportune moment to crash inexplicably” – kfx

....
|
|
| |
I'd like to posit that at least part of the problem is that we're using a multi-user operating system on machines that are almost universally single-user. While it may be nice to have that option, is it realistic to think that many people are going to hand over their laptop/desktop machine to another person?

We're trying to use a server OS on a single-user machine, complete with all the management cruft that comes along with a server-based OS. Consequently, we don't bother re-thinking what the user needs vs. a sysop.
|
|
| |
Windows did that, and look at the mess it got them into. Eventually they had to introduce UAC, essentially a simplified Unix-style permissions system.
|
|
| |
Yeah, we need to abandon this complicated Multics and create a single-user OS! Let's call it something with "uni" in it - maybe "Unix"?
|
|
| |
> [shell] It becomes especially bad when we try to develop in it, as it’s not a full-fledged programming language.

Who the hell 'develops' in shell? It's a glue language, not a development language. I've never heard anyone say "We're a shell shop".
|
|
| |
It's a glue language, but it's still painful when you write a shell script.
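The pain usually starts with quoting and word splitting; a toy example:

    f="my file.txt"
    touch "$f"
    rm $f      # word splitting: rm gets two args, "my" and "file.txt"
    rm "$f"    # what was actually meant
    # ...and it escalates from there: [ vs [[, exit codes in pipelines, set -e surprises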
|
|
| |
I spent a year working on ~80k lines of bash. It was an interesting experience.
|
|
| |
You must tell us more :-) For my anecdote, I know people who did 10k+ line final year university projects in AWK and Tcl respectively. The AWK one was a particular act of endurance.
|
|
| |
Story time. I'll go get the popcorn.
|
|
| |
Was the primary product in bash? Or were you working on glue?

You must have mastered the bizarreness of bash arrays by the end of that :)
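For the uninitiated, a small taste of that bizarreness (bash):

    a=(one "two words" three)
    printf '<%s> ' ${a[@]};   echo    # unquoted: re-split into 4 words
    printf '<%s> ' "${a[@]}"; echo    # quoted: the original 3 elements
    printf '<%s> ' "${a[*]}"; echo    # a single word, elements joined on IFS
    unset 'a[1]'
    echo "${!a[@]}"                   # indices are now "0 2": arrays go sparse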
|
|
|
| |
^^^ Open incognito if you don't want to see jwz's NSFW salutations to HN readers...
|
|
| |
Copying the link and pasting it in a new tab is sufficient -- it's just looking at the HTTP Referer header.
|
|
| |
Funny that the parent was flagged, even though the link is valid and the parent's author is not at fault.

Oh! The beauty of censorship by the masses. Now I can see why jwz would choose to take a stab at HN. He may be right, and even too kind. That flag is just sad...
|
|
| |
As a sibling comment points out, the submission actually quotes from Richard Gabriel's Worse is Better. That, and the snarky tone, may have been what caused people to downvote and/or flag the comment. 'swolchok could also have linked to the original, thus avoiding jwz's HN-referrer redirect.
|
|
| |
I know all that, but if reposts or not reading articles were deserving of censorship, there'd be quite a lot of [flagged] placeholders around here.
|
|
| |
The author is in fact quoting from "Worse is Better". RTFA.
|
|
| |
You might want to check that link again with an HN referer! Interesting results :)
|
|
|
| |
Please don't do this here.
|
|
| |
"Have you ever kissed a girl?"
|
|
| |
Exactly. Waayyy TMI (irrelevant).

If the bottom-up compositional model for computing that largely originated with Unix is fading, this article doesn't go there, or suggest a cause. Now, that's an article I'd like to read...
|
|
| |
I wonder why the original comment you responded to got flagged. I read it, and it was one of the best comments here.
|
|
| |
Because either people didn't get the reference, or the fanboys piled all over the originator...
|
|
| |
From my cold, dead (maxTimeToLive < 50y) hands!
|
|
| |
The beef that everything-is-text creates performance issues ("use a binary format instead!") seems bonkers. Every time there's a post like this, it's all performance!, performance!

I like UNIX not because of any technical reason, but because it's easy for a human to use. Linux and UNIX are not meant to be "perfect" from a CS standpoint. They're meant to be easy for human consumption. I can't read a binary format. I can read text.

That is the real point of the UNIX philosophy, IMO. Do one thing, do it well. Trying to solve too many problems at once will bite you.
|
|
| |
The links between ideology, communism, negative thinking, and UNIX run deep. It has been said that UNIX is the only operating system that grows on you. The ability of UNIX to tolerate negative thinking and poorly-behaved programs is its strength.

Please note that the Google result for "UNIX philosophy" names a GOOGLE EMPLOYEE as the "originator of unix philosophy" without disclosing the conflict of interest. This is not only evil but very stupid. When multibillion-dollar companies get involved in altering history, watch out.
|
|
| |
IMHO, a lot of the Unix philosophy boils down to: we do it this way because the neckbeard of my father and the neckbeard of his father and all the neckbeards before him unto the beginning of Unix on January 1, 1970 did it that way and if you want to do it differently, well, you better have a stronger neckbeard, because then we'll have two problems instead of just one.
|
|
| |
... i.e. a combination of "it's working, so let's not change it" and a lack of the sort of market pressure that competing in a money-changes-hands, we-have-customers marketplace brings: pressure to deliver a holistic, easy-to-convey user experience.
|
|
|
| |
Heh, are you religious, and/or do you believe in left/right/conservative/liberal/Dem/Rep? 'Cause that would make your comment hilarious.
|
|
| |
Pretty much, yes.

I've yet to see any complainer grow an adequate amount of facial hair, though. (Incompetently) reinventing some bad idea from before you were born does not cut it, you know?
|
|
| |
My favorite one: “everything is a file”.

My GPU has 4 times as many transistors as my CPU, and for parallel tasks it computes stuff 50 times faster. That's just too much complexity to hide behind a file, even with ioctl. I think that ideology is the main reason for the current state of 3D graphics on *nix- and BSD-based platforms.
|
|
| |
I think the problem is that Linux developers started abusing the Unix philosophy to the point that you had to know about a lot of different programs in order to be productive. Sometimes it's much more convenient if one program can do everything that you want it to do out of the box (it requires less understanding of the system).

The Unix philosophy is essentially the opposite of the Apple philosophy. It gives you flexibility and composability at the cost of simplicity and the overall experience. The optimal solution tends to be somewhere in between. If you look at Linux, it's actually a monolithic system (which goes against the Unix philosophy); the popularity of Linux is in itself proof that people do want a single cohesive product. If the Unix philosophy were the best approach, we'd all be using Minix by now.
|
|
| |
I didn't find MacOS to be any simpler than newbie-oriented Linux distros like Mint and Ubuntu. It was just filled with a ton of proprietary, bastardized, closed-source garbage and limits that made it more difficult for power users to understand and effectively manage the system.
|
|
| |
To be fair, Linux distros have improved a LOT in the past 5 years. When I first used Ubuntu many years ago, you couldn't do anything without the command line. Installing software was a pain (I had tons of problems with Ubuntu Software Center and it never seemed to work).

I use Ubuntu (GNOME) these days. The only thing I miss from Windows is Windows Explorer. Nautilus just doesn't cut it in my opinion; I always end up browsing the file system with the command line. That said, I still prefer Nautilus over OS X's Finder.
|
|
| |
"Installing software was a mess."Of all the things to complain about in Linux, you choose the one thing that Mac OS and Windows still don't have right, and Linux had pretty good even back then?
|
|
| |
Re: macOS: I'm having a hard time seeing how copying a directory with everything necessary contained in it is not a good install procedure. Install a program: copy to the Applications folder. Uninstall a program: Cmd-Delete or drag to trash.

Compare to Linux, where a piece of software is scattered around /usr/bin, /usr/lib, /usr/share, /usr/doc. (Or /usr/local/*, you never know which.)

Oh, and those fun times where something depends on a libxxx.so.N but all that's on the system is libxxx.so.N.M.O and libxxx.so.N.M for some reason, so you have to make the symlink yourself. Or the distribution has a minimum of version N+1, so your options are to find the source for the library, figure out all the -devel packages it needs, and compile it up (hopefully), or just symlink libxxx.so.N to libxxx.so.N+1 and hope it works.

And then the fun of figuring out what the package is named. pdftotext lives in poppler; who would have thought? Need gcc? That will be build-essential on Ubuntu, last time I needed it. (Not build-essentials, either.)
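The symlink dance in question, keeping the comment's libxxx placeholder (risky, since a soname bump usually signals an ABI change):

    # the app wants libxxx.so.N, but only libxxx.so.N.M.O is installed
    cd /usr/lib
    sudo ln -s libxxx.so.N.M.O libxxx.so.N
    sudo ldconfig    # rebuild the runtime linker's cache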
|
|
| |
So, a tarball full of statically linked binaries. You can do that on Linux, too.

And there are some new package managers that isolate in this way (and go well beyond it by containerizing). Flatpak is probably the most promising, IMHO. And it still provides all the benefits of a good package manager: verification, authenticity, automatic downloads from a repo, dependency resolution for core libraries. The way Flatpak handles that last feature is really quite cool (and avoids having to distribute dozens of copies of the same libs).

Your description of installing packages on Linux does not match my experience in the past decade. Dependency resolution is a solved problem on Linux, at least on the major distros with good package managers.
|
|
| |
Installation on Windows was always much easier; all programs had a relatively consistent UI wizard that stepped you through the installation process. Installing software from disks on Windows was really convenient (and disks were the real deal back then).

The fact that Linux relied on people to install stuff with the command line was a massive oversight. UIs are just way more intuitive than shell commands.
|
|
| |
"Installation on Windows was always much easier"I've rarely disagreed with something said on HN so strongly (at least among things that, in the grand scheme of things, really don't matter that much, but they matter a lot to my personal experience). "The fact that Linux relied on people to install stuff with the command line was a massive oversight. UIs are just way more intuitive than shell commands." This has never been true in the past 12 years. You have to go back even further to find a time when there weren't multiple GUIs for the leading package managers. And, for at least the past decade, the core GUI experience on every major Linux distro has had some sort of "Install Software" user interface that was super easy and provided search and the like. There's lots of things Linux got wrong (and some that it still gets wrong) that Windows or macOS got right. Software installation really just isn't one of them, IMHO. It's the thing I miss most when I have to work on Windows or macOS, and I miss it constantly...like multiple times a day. A good package manager is among the greatest time savers and greatest sources of comfort (am I up to date? do I have this installed already? which version? where are the config files? where are the docs? etc.) when I use any system, particularly one I haven't seen in a while. I just really love a good package manager, and Linux has several. Windows and macOS have none (because if the entire OS didn't come from a package manager, it's useless...you can't know what's going on by querying the package manager, if the package manager only installed a tiny percentage of the code on the system). So, even though there's choco on Windows and Homebrew (shudder...) on macOS, they are broken from the get-go because they are, by necessity, their own tiny little part of the system with little awareness or control over the OS itself.
|
|
| |
Why don't you like homebrew?

Also, if your problem with non-Linux package managers is that they only know about and control their own packages, then you must have the same objection to Nix and Guix, right? What happened to wanting simple tools that do one thing and one thing right? Don't we want package managers to only manage packages, to decouple them as much as possible from the rest of the operating system, and leave system configuration management to other tools?
|
|
| |
"Why don't you like homebrew?"I've blogged about some of my problems with Homebrew. Generally speaking, Homebrew is a triumph of marketing and beautiful web design over technical merits (there are better options for macOS, but none nearly as popular as brew). The blog post: http://inthebox.webmin.com/homebrew-package-installation-for... I get that it's easy and lots of people like it, so I mostly try to hold my tongue, but every once in a while I'll see someone suggest something crazy like using Homebrew on Linux (where there is an embarrassment of good and even great package management options) and it makes me shudder. I'm not saying don't use Homebrew on your macOS system if it makes your life easier. I just would never consider it for a production system of any sort. I'm even kinda mistrustful of it on developer workstations (though there are plenty of similarly scary practices in the node/npm, rubygems, etc. worlds, so that ship has kinda sailed and I am resolved to just watch it all unfold). "What happened to wanting simple tools that do one thing and one thing right?" I still want that. Doing one thing right in this case means doing more than what packages on macOS or Windows do. One can argue about the complexity of rpm+yum or dpkg+apt, and it's likely that one could come up with simpler and more reliable implementations today, but if you want them to be more focused, I have to ask which feature(s) you'd remove? Dependency resolution? That one's a really complicated feature; a lot of code, and it's been reimplemented multiple times for rpm (up2date, yum, and now dnf). Surely, we can just leave that out. Or, perhaps the notion of a software repository? Is it really necessary for the package manager to download the software for us? I mean, I have a web browser and wget or curl. Verification of packages and the files they install, do we really need it? Can't we just assume that our request to the website won't be tampered with, and that what we're downloading has been vouched for by a party we trust? I dunno...I'm not really seeing a thing we can get rid of without making Linux as dumb as macOS or Windows. "Don't we want package managers to only manage packages, to decouple them as much as possible from the rest of the operating system, and leave system configuration management to other tools?" This is the strangest question, to me. Why on earth would we want the OS outside of the package manager? Why would we want to only verify packages that aren't part of the core OS? This is why Linux is so vastly superior to Windows and macOS on this one front. I'm having a hard time thinking of why having the package manager completely ignorant of the core OS would be a good thing. What benefit do you believe that would provide? And, NixOS does not meet the description you've given. The OS is built with nix the package manager. Running nix as a standalone package manager on macOS does have the failing you've mentioned, but that's not the fault of nix. And, yes, nix is a better option for macOS than brew, but the package selection is much smaller and not as up to date in the general case...so maybe worse is better, in that case. I get a bit ranty about package management. I spend a lot of time working with them (as a packager, software builder, distributor, etc.) and have strong opinions. But, I believe those strong opinions are backed by at least better than average experience.
|
|
| |
> all programs had a relatively consistent UI wizard that stepped you through the installation process

Nowhere near as consistent as installing from a package manager.

> The fact that Linux relied on people to install stuff with the command line was a massive oversight. UIs are just way more intuitive than shell commands.

Every user-oriented distro has come with a GUI package manager for at least a decade, probably two.
|
|
| |
It's not using the command line that sucks about installing software. It's that there are so many different standards for installing software, overlapping in sometimes conflicting ways.
|
|
| |
Perhaps the issue when comparing Windows and Linux is more one of "dependency management" than merely installing applications or libraries? Although I've had issues with package managers screwing up dependencies in the past, it hasn't happened in a while, and the times it would have happened, I was warned beforehand.
|
|
| |
Windows has always been fine for _installing_ software. It's when you go to uninstall or upgrade it that you realize what a mess it is.
|
|
| |
Installing software from the distro is easy on Linux, but otherwise it's potentially a problem. Windows, on the other hand, has no "distro", but the .msi system works quite well.
|
|
| |
Pity Microsoft stopped bothering with .msi installers 10 years ago with Office 2007, and didn't bother to add any free ways to deploy all the new stuff they invented.
|
|
| |
For the average user, the worst install on Mac OS is dragging an icon from a dmg into the Applications folder. The other type is double-clicking a package.
|
|
| |
Large monolithic apps can be an ecosystem in their own right, broadly comparable to an OS. And it is common to find apps that adopt the Unix philosophy out of necessity within that ecosystem. This is not obvious to an outsider and requires an understanding of the domain.

A good example is ArcGIS, which on the surface is ridiculously monolithic. But within its toolbox function are several hundred programs that do only one thing and are composable. This approach is also seen in video or image editing workflows where a user works with a particular set of tools. The main difference is that the programs use a type system appropriate to the domain rather than just text.

The OS only really exposes an interface for working with OS-level objects. That sometimes aligns with a workflow, but not always. And we should not expect disciplines to align their techniques to OS-level objects if that is not a good fit for the actual domain.
|
|
| |
Linux is not a monolithic system. It has a monolithic (sorta) kernel. There's a big difference. You're arguing monolithic kernels vs. microkernels. Microkernels didn't even exist when UNIX was invented, so no, they are not representative of the "Unix philosophy". The "Unix philosophy" is merely about the user-space tools you use to get things done, since back in 1970 all they had was the shell (sh) and various tools like grep, ed, awk, etc.

It's entirely possible to have a microkernel with a Unix-like system; HURD attempts this. Microkernel vs. monolithic is an entirely separate issue.
|
|
| |
I think the parent poster is talking about userspace. A Linux distribution like Ubuntu that uses systemd, GNOME, and NetworkManager is certainly monolithic compared to Slackware circa 1999. NetworkManager alone is a pile of garbage that goes completely against Unix networking conventions, for, IMO, no good reason. OpenBSD's approach to integrating WiFi and other network interfaces into the existing BSD network management commands is so much better.
|
|