Re reduce binary size for embedded devices added by saen acro 4 months ago
Nobody seems to be saying much about Rust, or if they are, the LtU search can't find it. So I'm starting a Rust topic. After some initial use of Mozilla's "Rust" language, a few comments.
I'm assuming here some general familiarity with the language. The ownership system, which is the big innovation, seems usable. The options are single ownership with compile-time checking, or multiple ownership with reference counts.
The latter is available in both plain and concurrency-locked form, and you're not allowed to share unlocked data across threads. This part of the language seems to work reasonably well. Some data structures, such as trees with backlinks, do not map well to this model.
There's a temptation to resort to the "unsafe" feature to bypass the single-use compile-time checking system. I've seen this in some programs ported from other languages. It takes some advance planning to live within the single-use paradigm. Rust's type system is more troublesome. It's clever, well thought out, sound, and bulky. Rust has very powerful compile-time programming; there's a regular expression compiler that runs at compile time. I shudder to think of what things will be like once the Boost crowd discovers Rust.
Rust's error handling relies on generic return types, which must be instantiated with the actual return type. As a result, a rather high percentage of functions in Rust seem to involve generics. There are some rather tortured functional programming forms used to handle errors, such as chaining combinators on a "Result" value. You get to pick. Or you can just use ".unwrap()". It's tempting to over-use that. There's a macro called "try!" which hides an early return on error. Such hidden returns are troubling. All lambdas are closures (this may change), and closures are not plain functions. They can only be passed to functions which accept suitable generic parameters.
This is because the closure lifetime has to be decided at compile time. Rust has to do a lot of things in somewhat painful ways because the underlying memory model is quite simple.
This is one of those things which will confuse programmers coming from garbage-collected languages. Rust will catch their errors, and the compiler diagnostics are quite good. Rust may exceed the pain threshold of some programmers, though. Despite the claims in the Rust pre-alpha announcement of language definition stability, the language changes enough every week or so to break existing programs.
This is a classic Mozilla problem; that organization has a track record of deprecating old features in Firefox before the replacement feature works properly. As for the classic C/C++ memory-safety problems, Rust deals with all of them, without introducing garbage collection or extensive run-time processing.
This is a significant advance. Is there a formalization arguing that the Rust type system (or interesting fragments of it) is sound? Of course, you can argue that issues like type soundness should not be "bolted on".
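For readers unfamiliar with what such a soundness result would claim: it is conventionally stated as a pair of theorems over a small-step semantics, progress and preservation. This is the standard textbook formulation, not something specific to Rust:

```latex
\text{(Progress)}\quad
  \vdash e : \tau \;\Longrightarrow\;
  e \text{ is a value} \;\lor\; \exists e'.\; e \longrightarrow e'
\qquad
\text{(Preservation)}\quad
  \vdash e : \tau \;\land\; e \longrightarrow e'
  \;\Longrightarrow\; \vdash e' : \tau
```

Together these say a well-typed program never gets stuck: it either finishes with a value or can take another step, and stepping never loses its type. For Rust, the interesting part would be extending this to ownership and borrowing.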
The Rust guys seem more concerned with pragmatism. I know that formalizing things is hard and that it may fail, but I'm surprised by the absence of visibility of any attempt at it -- I'm also surprised not to see more people ask this question. The Rust project does many things right (it's extremely tricky to coordinate a joint effort between a tight group inside a structure and a larger volunteer base, for example); formal semantics design seems from the outside to be one of its weak points.
To be fair, Niko Matsakis and Aaron Turon (and surely others that I haven't seen in action; I'm not very familiar with the Rust community) have been evolving the type system in careful ways, and I suspect they have some good internal intuition of what should be sound and what isn't. There is still a lack of a good example here. A related "good example" is the effort of Richard Eisenberg to maintain a formal description of the static and dynamic semantics of GHC Core Haskell.
It does not contain a soundness proof, but those have been established for language fragments in various publications. There was an attempt at formalizing a subset of Rust, but it is inactive.
If a formalization occurs, it is more likely to be the result of someone from the academic community than an internal effort from the core contributors. If formalizations had to come first, it would severely constrain the design space of the language to what was conveniently formalized; even Haskell runs ahead of formalization a lot.
Another problem might be that their large volunteer base doesn't include theory-oriented PL people; I think C gets a lot of help from the PL theory crowd at MSR in that regard. If you are volunteering, I'm sure they would welcome your efforts. Also, the typical role of formalization is post-mortem: "If formalizations had to come first, it would severely constrain the design space of the language to what was conveniently formalized."
I'm asking about the type system here, and I don't think it's absurd to restrict a type system to what you can formalize (of course you'll experiment with new features, but one could hope the gap between an experiment and its soundness proof to be on the order of months, not years or decades). Of course, to formalize a type system you also need a proper specification of the dynamic semantics. But I find that most languages whose dynamic semantics evolve in no-idea-how-to-formalize-yet territory are not statically typed in the first place.
Could you be more precise about what you are referring to here? I'm not very familiar with the internals of the GHC team (I read GHC Weekly news and research papers), but the examples of type system changes I can recall have been formalized, generally before they were considered for inclusion in the language: type-level literals by Adam Gundry, roles by Richard Eisenberg and Stephanie Weirich, and of course the various older work on GADT inference.
I rather find GHC to be exemplary in maintaining a strongly-typed intermediate language with strong soundness arguments, so I'm interested in your contrary point of view. I initially planned to discuss the "people" aspect here but chose not to, because making assumptions about whether someone specific could or should do this or that makes me uncomfortable.
Here is a paper circa http: Anyways, your language doesn't exactly fall down without an up-to-date formalization. Formalizations aren't exactly easy: for an entirely new language with a new type system, there isn't a featherweight Rust yet. So why not years instead of months? And really, there is only a handful of PL theory people who are capable of and interested in doing this kind of work.
Subsequent papers describe a good part of Haskell, especially its type system (Faxen), but there is no one document that describes the whole thing.
Certainly not because of a conscious choice by the Haskell Committee. Rather, it just never seemed to be the most urgent task. No one undertook the work, and in practice the language users and implementers seemed to manage perfectly well without it. Indeed, in practice the static semantics of Haskell (i.e. the semantics of its type system) is where most of the complexity lies.
The consequences of not having a formal static semantics is perhaps a challenge for compiler writers, and sometimes results in small differences between different compilers. But for the user, once a program type-checks, there is little concern about the static semantics, and little need to reason formally about it.
Nevertheless, we always found it a little hard to admit that a language as principled as Haskell aspires to be has no formal definition. But that is the fact of the matter, and it is not without its advantages.
In particular, the absence of a formal language definition does allow the language to evolve more easily, because the costs of producing fully formal specifications of any proposed change are heavy, and by themselves discourage changes.
It doesn't matter if programmers care. But as a client of one of their products, say, as a passenger on a plane, or connected to a life support machine, or trusting all my private life and riches to an encryption mechanism, I might care. This sub-subthread (not the discussion with Sean, but raould's post and everything below it) constantly has me asking myself whether I'm being trolled, and whether it is worth spending time participating in a discussion that seems purposeless.
I thought the benefit of formalism (in principle) and its cost, and thus the fact that a compromise exists, had been established for decades -- that we didn't have to argue for it anymore. Some LtU participants argue that the specific formalisms that have been developed could be replaced with something better going in another direction (e.g. John Shutt on type systems), and that's interesting. But it's quite different from suggesting that anyone would believe that "formalism is the worst thing that ever happened to programming", an idea I can't quite follow. There are relatively well designed languages and systems and poorly designed ones, and I don't think formalization had much of a role to play in making the good ones good.
An intelligent designer who had experience in software engineering had that role. The programming languages I use have mostly not been formalized.
The ones that need formal descriptions need them because they've gotten too complex to understand, and frankly are possibly defective in that way. It's complex for no good reason, and the templates are unreadable for no good reason. And it's so complicated that each little clarification had to be explored after all of the remaining compilers for it failed a test and had to be revised. And the reason there are so few compilers is that the language is too complicated to understand.
I don't want to figure out overly complex Haskell type-systems; I don't see the reason for such excessively meticulous typing when all I want is a program that does the job. You don't formalise something when it gets too complex. You formalise something to avoid it getting too complex. Even "intelligent" designers tend to gloss over a lot of hidden complexity that only bites back later, once the system grows.
Formalisation helps tremendously to avoid complexity because it forces the designer to realise, and stay honest about, all that complexity. Thus, design and formalisation work best when they go hand in hand. Which isn't to say that formalisation can't get in the way, but there is a value to it. Implementation also works to keep things simple, but formalizations are quite similar to implementations in a differently constrained language.
I'm not aware of many PL designers that use formalization hand in hand with design; they do exist, especially when the focus is on safety and verification. It'd be nice to think we're in the Middle Stone Age.
We may be in the Lower.