The point is that both Unix and Windows faced the exact same problem: needing to support characters wider than one byte. MS thought little and unleashed their army of coders to do something foolish. The Unix guys thought hard and did something elegant and sensible.
NT was started around 1988 and released in mid-1993. UTF-8 wasn't ready until early 1993. It's too bad they couldn't go back in time to retrofit everything.
Programming is a series of tradeoffs. Unless you are in the middle of doing it, you don't understand the pressure the programmers were facing or the tradeoffs that needed to be made. The Windows and Unix guys didn't face the same problem: they were different problems, with different tools available, in different time periods. Hindsight is 20/20, and it's easy to be dickish and laugh at their mistakes afterward.
A huge amount of Unix software uses UTF-16 just like Windows does (including Java). You're just being deliberately ignorant of the history. UTF-8 didn't exist yet, and UCS-2/UTF-16 was the standard. One could argue that the Unicode Consortium screwed up by assuming the Basic Multilingual Plane would be enough characters for everyone.
I am not ignorant of the history. Please try to follow this reasoning:
1. Faced with a new character set larger than eight bits (16-bit Unicode), Microsoft said "hey, let's make an all-new API" and set to work rewriting everything.
2. Faced with a new character set larger than eight bits (32-bit Unicode), the Unix guys said "hey, let's create a standard way to encode these characters and rewrite nothing."
You seem to be fixated on the difference between the new character sizes. Ignore the precise number of bits! The point is: when adapting to a new system, do you rewrite everything and risk introducing bugs everywhere, or do you do something clever that carries far less risk and keeps the same API?
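To make the "same API" point concrete, here is a minimal sketch (in Python, purely illustrative, not from the thread) of the property UTF-8 was designed around: ASCII text encodes to identical bytes, and every byte of a multibyte sequence is >= 0x80, so old byte-oriented code keeps working unchanged:

```python
# ASCII text encodes to the exact same bytes under UTF-8.
assert "hello".encode("utf-8") == b"hello"

# Every byte of a non-ASCII character's encoding is >= 0x80, so a
# continuation byte can never be mistaken for an ASCII delimiter.
assert all(b >= 0x80 for b in "é".encode("utf-8"))

# Consequence: a naive byte-level path splitter (think strchr('/'))
# still works on UTF-8 file names with no API change at all.
path = "/home/çelik/notes.txt".encode("utf-8")
parts = path.split(b"/")
assert parts[-1].decode("utf-8") == "notes.txt"
```

This is exactly why so much of the byte-oriented Unix toolchain never needed a rewrite.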
#2 massively ignores the fact that they didn't bother to solve the problem until much, much later. In fact, before then everyone else solved it the same way as Microsoft, even on Unix. Actually, a bunch of Unix guys were involved in the design of UCS-2 and UTF-16, so I'm not sure why it's Microsoft's fault.
But yes, some Unix guys, eventually faced with a bigger problem, significantly more time, and a design already started by the Unicode Consortium, solved it better. That's not really much of an argument, though.
Also, arguing that there is no risk in going to UTF-8 is ridiculous. Anything that treats UTF-8 as ASCII, as you suggest, is going to get it wrong in some way. At least making a new API forced developers to think about the issue.
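For instance (a minimal Python sketch of my own, not from the thread), code that assumes one byte per character breaks in two classic ways once the bytes are actually UTF-8:

```python
text = "naïve"              # 'ï' takes 2 bytes in UTF-8
raw = text.encode("utf-8")  # b'na\xc3\xafve', 6 bytes

# Bug 1: truncating at a byte count can split a multibyte character,
# leaving invalid UTF-8 behind.
truncated = raw[:3]         # cuts 'ï' in half
try:
    truncated.decode("utf-8")
    assert False, "expected invalid UTF-8"
except UnicodeDecodeError:
    pass                    # the truncated bytes no longer decode

# Bug 2: byte length != character length, so ASCII-era "column width"
# logic miscounts.
assert len(raw) == 6 and len(text) == 5
```

Bugs like these are exactly the "do it wrong in some way" risk: the code compiles, runs, and passes on pure-ASCII input, then silently corrupts data later.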
They didn't exactly face this problem. The Linux kernel actually has almost no idea of any kind of Unicode or encoding, except in two places: the character console code and Windows-originated Unicode-based filesystems. It's interesting to note that NTFS in the Windows kernel implements its own case-folding mechanism for Unicode, and that this is probably the only significant place where the Windows kernel has to care about Unicode.