Talk:Singularity (operating system)

From Wikipedia, the free encyclopedia


Singularity

The concepts of "unsafe" and "safe mode" should be explained in this article since they're obviously not very common terms. -- intgr 16:48, 20 May 2006 (UTC)

Linked each to type safety to provide context. Vesta 03:06, 26 May 2006 (UTC)
I'm sorry, but I took those out because I don't think it has anything to do with type safety. C# is always a type-safe language, for example. This seems to be talking of a "mode" rather than a programming language concept. Are you really sure this was about type safety? I first thought it spoke of unsafe code, but the sentence still doesn't make much sense to me even if that were the case. — Northgrove 02:01, 6 March 2008 (UTC)

Also, does this operating system assume that the virtual machine executing the code is 100% bulletproof? It seems that you could circumvent any restrictions once you manage to gain control of the virtual machine, and thus also control the entire operating system as you see fit. -- intgr 16:48, 20 May 2006 (UTC)

I don't see what you're getting at. Microsoft uses a JIT compiler rather than a virtual machine. Vesta 03:06, 26 May 2006 (UTC)
I think they're called virtual machines whether they compile the bytecode into native code or interpret it, but I guess that is debatable. Either way, JIT compilers are not necessarily completely secure. The optimizations performed by the compiler can create plenty of loopholes that can be used to trick the compiler into executing malformed or custom-crafted machine code. So I'm asking if hypothetically one such bug could be used to gain control of the entire virtual machine. -- intgr 17:47, 26 May 2006 (UTC)
Such a bug can probably be exploited to gain control of the entire machine, not just the VM. Let's hope that the developers build the compiler correctly. :) -- Vesta 02:11, 28 May 2006 (UTC)
In one of their videos they mentioned this. They plan on moving from "trusting" Bartok to compiling to some kind of typed assembly language. That way the actual byte code can be verified. -- Jmacdonagh 02:00, 24 October 2006 (UTC)
In fact, it could theoretically be possible to create a proven implementation. However, such a task is pretty complex. A typed assembly language such as TALx86 could indeed be used to detect some compilation bugs. It is important to note that even with a typed assembly language, security holes are still possible. Nevertheless, such an architecture would result in far fewer security holes than a typical 'modern' OS. --debackerl 18:18, 29 October 2006 (UTC)

Shouldn't we mention JNode here? It is more or less the same thing, but in Java. —Preceding unsigned comment added by 57.79.167.12 (talk • contribs)

Overly Technical

I added the overly technical box since an average user would have no idea what's going on in this article. Nrbelex (talk) 21:30, 4 October 2006 (UTC)

OSs

What's with the OSs on the right? How can this have anything to do with DOS if the project started in 2003? —Preceding unsigned comment added by 86.29.53.244 (talk) 08:42, 22 October 2007 (UTC)

So why doesn't the CPU do this?

"Instead, there is only a single address space in which "Software-Isolated Processes" (SIP) reside. Each SIP has its own data and code layout, and is independent from other SIPs. These SIPs behave like normal processes, but do not require the overhead penalty of task-switches. Protection in this system is provided by a set of invariants"

Ok, I get the idea here. But if this results in real-world performance benefits due to the lack of task-switching overhead, doesn't this suggest that the CPU could be extended to provide the same sort of invariants and thereby offer the same benefit to all OSes?

Maury (talk) 14:47, 6 March 2008 (UTC)
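The SIP design quoted above can be modeled in miniature. This is only an illustrative sketch, not Singularity's actual API: the `SIP` class, its queue-based inbox, and the dictionary messages are inventions for this example. What it shows is the idea of private state in a single shared address space, with interaction happening only through messages whose ownership is handed over rather than shared.

```python
# Minimal model of software-isolated processes: everything lives in one
# address space (one Python process), each SIP keeps private state that no
# other SIP can reach, and SIPs interact only by passing messages.
# (The SIP class and its inbox are illustrative, not Singularity's API.)
from queue import Queue

class SIP:
    def __init__(self, name):
        self.name = name
        self._state = {}      # private heap: no other SIP holds a reference
        self.inbox = Queue()  # the only channel into this SIP

    def send(self, other, message):
        # Ownership transfer: after sending, the sender must drop the message.
        other.inbox.put(message)

    def receive(self):
        return self.inbox.get()

a, b = SIP("a"), SIP("b")
msg = {"payload": 42}
a.send(b, msg)
del msg                        # sender relinquishes its reference
print(b.receive()["payload"])  # prints 42
```

Because no hardware protection boundary is crossed, "switching" between SIPs here is just an ordinary function call and a queue operation, which is the source of the overhead savings the article describes.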

The invariants it is referring to here are the assurances the OS makes that a program will not access memory that it does not own. The OS can make this assurance when the program is installed and it analyzes the MSIL code. The performance benefits are realized because the safety checking is done only once, at install time, instead of at runtime by the CPU. —Preceding unsigned comment added by 207.13.122.233 (talk) 16:17, 10 March 2008 (UTC)
Sure, but what I'm thinking is that the compiler could export this into CPU primitives driving the MMU, avoiding doing any of this "in code" at all. Modern CPUs are already trying to handle this invisibly through hyper-threading and such, it seems that making this explicit would be a great improvement. I can't help thinking about the Transputer's approach. Maury (talk) 12:22, 13 March 2008 (UTC)
I think you are misunderstanding what an invariant is. It is not some operation that is performed on the processes or applications. Rather, it is a set of guarantees: the OS guarantees that no matter what the installed applications do, memory protection (one app cannot modify the memory of another, and so on) will hold true.
The OS can guarantee this because of the architecture of the operating system. All applications are written in a type-safe, verifiable language (like C#), which does away with unsafe constructs like pointers, unchecked buffers, and invalid casts. Such code is further subjected to static analysis at compile time and/or installation. During this analysis, it can be checked whether the application does anything that might violate the invariants. The OS checks applications during installation; if an application does not pass the test, it is not installed. Consequently, if it gets installed, it is guaranteed not to violate memory protection requirements. The check is done at install time, not at run time.
How many times do you install a piece of software compared to how many times it is run? As such, things that happen at run time are a much better candidate for hardware acceleration. Also, different OSs might do this in different ways, and a CPU has to be as generic as possible. So there is very little incentive at the CPU level to incorporate things that are done by a single OS only. --soum talk 12:41, 13 March 2008 (UTC)
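The install-time check described above can be sketched with a toy instruction set. This is not Singularity's actual verifier (which analyzes MSIL or typed assembly); the opcode names, the tuple program format, and the fixed memory size are all assumptions made up for illustration of the "verify once at install, run many times unchecked" idea.

```python
# Toy sketch of install-time verification (not Singularity's real verifier):
# the OS checks a program's instructions once, when it is installed. A program
# that would touch memory outside its declared region is rejected up front;
# an accepted program then runs with no per-access checks.

MEMORY_SIZE = 16  # cells each installed program may own (made-up figure)

def verify(program):
    """Install-time check: every LOAD/STORE address must be in bounds."""
    for op, arg in program:
        if op in ("LOAD", "STORE") and not (0 <= arg < MEMORY_SIZE):
            return False      # would violate the memory-safety invariant
    return True

def install(program):
    if not verify(program):
        raise ValueError("rejected at install time: unsafe memory access")
    return program            # "installed": trusted from now on

def run(program):
    """Runtime executes with no bounds checks; safety was proven already."""
    memory, acc = [0] * MEMORY_SIZE, 0
    for op, arg in program:
        if op == "CONST":
            acc = arg
        elif op == "STORE":
            memory[arg] = acc  # no check here: verifier guaranteed bounds
        elif op == "LOAD":
            acc = memory[arg]
    return acc

good = install([("CONST", 7), ("STORE", 3), ("LOAD", 3)])
print(run(good))               # prints 7
try:
    install([("STORE", 99)])   # out of bounds: caught before it ever runs
except ValueError as e:
    print(e)
```

The point of the sketch is the cost model discussed above: `verify` runs once per installation, while `run` can execute the installed program any number of times without paying for per-access checks.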