It's smaller, but not actually negligible. An old but oft-cited 2002 study found that a laptop with a three-year lifecycle took about twice as much energy to manufacture as to operate. Silicon is much more energy-intensive than other materials, with fab energy consumption apparently holding relatively steady over time at roughly 1 kWh/cm^2 of silicon processed.
So, there is a high bar for replacing a computer with a new one to actually save net resources, but an actual 10x reduction in power consumption like replacing a P4 desktop with an RPi is big enough to pay off reasonably quickly.
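As a back-of-the-envelope check (every number below is my own illustrative assumption, not from the study):

```python
# Rough payoff time for swapping a P4 desktop for a Raspberry Pi.
# All figures here are illustrative assumptions.
embodied_kwh = 200              # assumed manufacturing energy of the new machine
old_watts, new_watts = 100, 5   # assumed draw: P4 desktop vs. Raspberry Pi
hours_per_day = 8               # assumed daily usage

saved_kwh_per_day = (old_watts - new_watts) * hours_per_day / 1000
payoff_days = embodied_kwh / saved_kwh_per_day
print(round(payoff_days))  # -> 263, i.e. under a year at these numbers
```

Even if you double the embodied energy or halve the usage hours, it still pays back within a couple of years.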
I am even more interested in how the numbers work out if you buy one professional laptop (say, a Lenovo X13) with a Ryzen CPU -- since those consume less power -- and I am looking forward to reading such an analysis sometime in the future.
Do you mean "seL4 is great"? I agree it won't do much for application-level security without adding some formally verified code on top (perhaps as simple as setting up isolation between VMs), but it looks great if you do want to use formal methods. At the simplest, just starting out with a formal semantics of the OS, and reason to trust that semantics, would save a lot of work (though of course a lot may remain).
Why do you think D-Wave machines can be clustered at all? Unless you're claiming their operation is not essentially quantum, that would mean demonstrating coherence across a bunch of machines and a scalable quantum network!
I'm sceptical that you've ever actually tried to get any data. The first sentence of NASA's GISTEMP page tells you where they get their data, and a few more clicks will get you to daily logs for a decent chunk of it.
Sparkie also doesn't understand that raw data, especially over long timescales where collection methods have changed, is messy stuff. It has to be cleaned up to be useful, and the people best qualified to do that are, gosh, climate scientists. Temperature dataset papers are routinely published, and any odd assumptions get challenged by other people knowledgeable in the field.
In short, demanding "raw data" is a shining example of the Dunning-Kruger effect.
I disagree strongly - raw data should definitely be available, and there are a number of interesting things you can do with it without any special expertise. For one, you can evaluate whether those adjustments even affect the overall conclusions you care about, like this:
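A sketch of one such check, on synthetic stand-in data (a real run would load e.g. GISTEMP station series instead; the trend magnitudes here are made up):

```python
# Does adjusting the series change the headline trend? Synthetic demo.
import numpy as np

years = np.arange(1950, 2021)
rng = np.random.default_rng(0)
# Stand-in "raw" series: an assumed underlying trend plus noise.
raw = 0.015 * (years - 1950) + rng.normal(0, 0.1, years.size)
# Pretend the "adjustments" are small station corrections:
adjusted = raw + rng.normal(0, 0.02, years.size)

def trend_per_decade(series):
    """Least-squares warming trend, in degrees per decade."""
    return 10 * np.polyfit(years, series, 1)[0]

# The question is simply whether the two trends differ materially:
print(trend_per_decade(raw), trend_per_decade(adjusted))
```

No climatology expertise needed for that comparison, just a regression.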
The response to the client is only sent after a majority of followers have the log entry. That's described in the text in the "Protocol Overview", and nicely animated in "Log Replication".
Yes, they have the log entry, but it can still be rolled back, can't it? Here's how I understand the process:
1. Client sends log entry to leader
2. Leader appends log entry, forwards it to followers
3. Majority of followers confirm
4. Leader commits the log entry
5. Leader confirms the commit to the client
6. Followers commit on the next heartbeat
What happens if the leader goes away between 5 and 6? To my eyes, it looks like the followers will time out, elect a new leader, and have to roll back the last log entry.
If an entry has been replicated to a majority of followers, then the new leader is guaranteed to have that entry and therefore it won't be rolled back.
That is correct. The solution to this is given in section 5.4.1 (election restriction), section 5.4.2 (Committing entries from previous terms) and section 8 (Client interaction) of the Raft paper.
Roughly, a newly elected leader will have all committed entries (guaranteed by the "election restriction", 5.4.1), but it does not know precisely which entries are committed. The new leader will commit a no-op log entry (section 8), and after it has received replies from a majority of the cluster it will know which entries have already been committed.
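To see why the guarantee holds, here's a toy sketch (numbers made up): with 5 nodes, an entry replicated to 3 of them, and votes granted only to candidates whose log is at least as up-to-date (compared by last term, then last index), a candidate missing the entry can never assemble a majority:

```python
# Toy 5-node cluster; each log summarized as (last term, last index).
# The newest entry (term 2, index 5) reached a majority: A, B, C.
logs = {
    "A": (2, 5), "B": (2, 5), "C": (2, 5),
    "D": (2, 4), "E": (2, 4),  # never saw the entry
}

def votes_for(candidate):
    """A node grants its vote only if the candidate's log is at
    least as up-to-date as its own (Raft's election restriction)."""
    cand = logs[candidate]
    return [node for node, last in logs.items() if cand >= last]

assert len(votes_for("D")) < 3   # only D and E vote for D: no majority
assert len(votes_for("A")) >= 3  # any election winner is in {A, B, C}
```

(Real Raft voting also tracks terms and votedFor; this only shows the log-comparison part.) Any majority of voters must overlap the majority holding the entry, and those nodes refuse stale candidates.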
Non-Turing-complete is not a bad way to go. You pretty much have to already be a researcher in dependent type systems (or maybe set theory) to invent functions that always terminate but can't be written in non-Turing-complete languages like Coq (an evaluator for programs in an at-least-as-powerful dependently typed language is the only remotely natural example I know of).
Also, writing a checker that proves *some* programs terminate is way easier than writing one that recognizes *every* terminating program (which is impossible), in case you are conflating the two. If it doesn't come up too often, "I couldn't prove this terminates" sounds like a reasonable compiler error.
It can be kind of hard to satisfy termination checkers, though. They're not smart. You basically have to show structural induction on something which sometimes forces you to invent lots of new proof terms.
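For a flavor of what that looks like, a Lean 4 sketch (exact `termination_by`/lemma details vary by version, so treat this as illustrative): recursion on a structural subterm is accepted silently, while recursion through `n / 2` needs an explicit measure and a proof that it shrinks:

```lean
-- Structural recursion: accepted with no help,
-- because `xs` is a subterm of `x :: xs`.
def sumList : List Nat → Nat
  | [] => 0
  | x :: xs => x + sumList xs

-- `n / 2` is not a subterm of `n`, so we name a decreasing
-- measure and prove it actually shrinks on each call.
def log2 (n : Nat) : Nat :=
  if h : 2 ≤ n then
    log2 (n / 2) + 1
  else
    0
termination_by n
decreasing_by
  exact Nat.div_lt_self (by omega) (by omega)
```

Here one library lemma suffices, but on anything less textbook-shaped those `decreasing_by` obligations are exactly the "lots of new proof terms" you end up inventing.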