You can finance the purchase to avoid upfront payments. And in many cases, the energy savings exceed the finance payments, resulting in a net monthly gain on a cash-flow basis with no upfront payment.
>> You can finance the purchase to avoid upfront payments.
Yeah, and then you pay interest on the loan. Which makes it EVEN MORE expensive AND lengthens your payback period just to break even.
FYI you're not "saving" anything until A) Your loan is paid off and B) Your array is generating enough energy to compensate for your existing energy use.
Your numbers just don't add up:
$15,000 for a 5 kW array.
$15,000 loan with a VERY generous 5% interest rate on an also very generous 6-year term.
Interest paid over six years on a standard amortizing loan: about $2,400
Total paid after six years: about $17,400
Monthly payments would be around $242.00
The average monthly cost of electricity for Minneapolis is about $190.00, which is about 1,097 kWh.
The monthly average your 5 kW array can generate (assuming optimal conditions) is around 700 kWh, leaving you with a deficit of 397 kWh you still need to pay for.
So no, I'm not seeing how the cost savings will exceed your finance payments. It will eventually pay for itself once you get past that payback period. And then what? You get 15 years of free electricity, which amounts to:
$190.00 * 12 = $2,280
$2,280 * 15 = $34,200
So then, over 25 years, your net gain is about $34,200 - $17,400 ≈ $16,800? Which is about $670/year?
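Those loan figures can be sanity-checked with the standard annuity formula. A quick sketch (the $15,000 principal, 5% APR, and 6-year term are the thread's assumptions, not real loan quotes):

```shell
# Monthly payment on an amortizing loan: P*r / (1 - (1+r)^(-n))
awk 'BEGIN {
  P = 15000              # principal
  r = 0.05 / 12          # monthly rate (5% APR)
  n = 6 * 12             # number of monthly payments
  pay = P * r / (1 - (1 + r)^(-n))
  printf "monthly=%.2f total=%.2f interest=%.2f\n", pay, pay * n, pay * n - P
}'
```

Amortized interest comes out well below the simple-interest estimate ($15,000 × 5% × 6 = $4,500) because the balance shrinks every month.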
Every single year I talk to companies and the cost has gone down, barely. I've been told every year for the last 20 years that the technology is getting so much better. The panels are so much more efficient, cost less, the state and federal governments have tax breaks, blah, blah, blah. The Chinese have found a way to produce them this way and that way: "Oh, you just wait, it's really going to be affordable in the next few years!"
No, it's still not affordable. If it were, like OP said, you would see them on every house in your neighborhood.
I've wanted to put solar on my house for a very long time, and every year it's the same thing: "Finance a $15,000 loan and in ten years you'll have free electricity!!"
I would say anybody who's rational, informed, and interested, looking at that, would conclude 100% of the time that it's not worth it.
Solar module prices over the past 20 years went from $2/watt to $0.30/watt. Installed prices are higher, going from maybe $6/watt to ~$3/watt. I paid about $3.50/watt installed in Berkeley 3 years ago, a place not known for affordability. In the meantime, average US electricity rates went from ~$0.10/kWh to about $0.16/kWh. If over that period your payback period has remained constant, you must have dramatically (like 20x) reduced your power usage. Congratulations!
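Taking those figures at face value, the payback period should have shrunk substantially, since payback scales roughly with installed cost per watt divided by the retail rate (the capacity-factor term needed for absolute years cancels out of the ratio). A back-of-envelope sketch using the numbers above:

```shell
# Relative payback ~ (installed $/W) / (retail $/kWh)
awk 'BEGIN {
  p_then = 6.00 / 0.10   # ~20 years ago: $6/W installed, $0.10/kWh
  p_now  = 3.00 / 0.16   # today: ~$3/W installed, $0.16/kWh
  printf "payback today is about %.0f%% of what it was\n", 100 * p_now / p_then
}'
```

So, on these rough numbers, the payback period should be roughly a third of what it was, which is the commenter's point.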
I'm the author of the article. Yes, I agree. I think AI reviewers, in their current state, are essentially glorified linters. Much of what they excel at can already be achieved with linting. However, I believe their edge lies in spotting semantic mistakes, whereas linters are ideally suited for syntactic or stylistic issues.
It's always exciting to see new approaches to code reviews - GitHub has its strengths, but it’s far from perfect.
For the scenario you’ve outlined, have you thought about splitting the 3 patches into separate, dependent pull requests? While GitHub doesn’t natively support this, the right code review tool (shameless plug - I’m part of a team building one called GitContext) should allow you to keep pull requests small while maintaining dependencies between them. For example, patch 3 can depend on patch 2, which in turn depends on patch 1. The dependency tracking between them - provided by the code review tool - can ensure everything is released in unison if that's required.
Each patch can then be reviewed on its own, making feedback more targeted and easier to respond to. You can even squash commits within a pull request, ensuring a clean commit history with messages that accurately reflect the individual changes. Better still, with the right tool, you can use AI to summarize your pull request and review, streamlining the creation of accurate commit messages without all the manual effort.
A good code review tool also won’t get bogged down by git operations like rebases, merges, or force pushes. Reviewers should always see only the changes since their last review, no matter how many crazy git operations happen behind the scenes. That way, you avoid having to re-review large diffs and can focus on what’s new. The review history stays clean, separate from the commit history.
I'd be curious whether this approach to splitting up pull requests and tracking their interdependencies would address your needs.
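For anyone who wants to approximate this with plain git before adopting any tool, dependent branches can be stacked directly. A runnable sketch (the repo, file, and branch names are invented for illustration):

```shell
# Demo repo so the sketch runs end-to-end.
cd "$(mktemp -d)" && git init -q && git config user.email you@example.com && git config user.name you
echo base > app.c && git add app.c && git commit -qm 'initial' && git branch -M main

# Stack three dependent patches, each branch based on the previous one:
git checkout -q -b refactor main          # patch 1
echo refactor >> app.c && git commit -qam 'small refactor'
git checkout -q -b new-api refactor       # patch 2, depends on patch 1
echo api >> app.c && git commit -qam 'add new API'
git checkout -q -b use-api new-api        # patch 3, depends on patch 2
echo caller >> app.c && git commit -qam 'use new API'

# Each branch becomes its own PR: the base of new-api's PR is refactor,
# and the base of use-api's PR is new-api. After amending an earlier
# branch, newer git can rebase the whole stack: git rebase --update-refs
git log --oneline use-api
```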
> It's always exciting to see new approaches to code reviews - GitHub has its strengths, but it’s far from perfect.
This is a nice sentiment: a positive reception to an idea, and polite to the incumbent.
But it's so thoroughly not a new idea. It's literally the workflow git was designed to support, and is core to many long-standing criticisms about GitHub's approach for as long as GitHub has had pull requests.
And I'm over here wondering why this idea took *checks calendar* over 15 years to graduate from the denigrated mailing list degens and into hip trendy development circles.
I thought we were knowingly choosing shit workflows because we had to support the long-standing refusal by so many software devs to properly learn one of their most-used tools. That's why I chose the tools I chose, and built the workflows I built, when I migrated a company to git. Nobody gets fired for buying IBM after all.
I mean, the answer is simple. Even if email-based flows use range-diff, which is the correct conceptual model, all the actual details of using email are, I would estimate, about 1,000x shittier in 2024 than using GitHub in 2008 when I signed up for the beta as user #3000-something.
Email flows fucking suck ass. Yes I have used them. No, I won't budge on this, and no, I'm not going to go proselytize on LKML or Sourcehut or whatever about it, in Rome I'll just do as the Romans even if I think it sucks. But I've used every strategy you can think of to submit patches, and I can't really blame anyone for not wading through 500 gallons of horrendous bullshit known as the mailing list experience in order to glean the important things from it (like range-diff), even if I'm willing to do it because I have high pain tolerance or am a hired gun for someone's project.
Also, to be fair, Gerrit was released in 2009, and ReviewBoard, whose creator (in this thread!) noted that it supports interdiffs for multiple version control backends, was released in 2006! This was not a totally foreign concept; it's just that, for reasons, GitHub won, and the defaults chosen by the most popular software forge in history tend to have downstream cultural consequences, both good and bad, as you note.
> all the actual details of using email are, I would estimate, about 1,000x shittier in 2024 than using GitHub in 2008
Disagree about "all". Tracking patches in need of review is better done in a good MUA than on github. I can suspend a review mid-series, and continue with the next patch two days later. Writing comments as manually numbered, plaintext paragraphs, inserted at the right locations of the original patch is also lightyears better than the clunky github interface. For one, github doesn't even let you attach comments to commit message lines. For another, github's data model ties comments to lines of the cumulative diff, not to lines of specific patches. This is incredibly annoying, it can cause your comment for patch X to show up under patch Y, just because patch Y includes context from patch X.
Edited to add: github also has no support for git-notes. git-notes is essential for maintaining patch-level changelogs between rebases. Those patch-level changelogs are super helpful to reviewers. The command line git utilities, such as git-format-patch, git-rebase, git-range-diff, all support git-notes.
Dude, I'm not making a defense of mailing list workflows here. I'm just pondering the nature of the world where despite all the yapping about git I've seen floating around on the internet for as long as I've been lurking social media, the yappers are just recently keying in on something.
If you're asking "Why did this take 15 years for people to understand" and my reply is "Because it was under 1000 layers of other bullshit", then that's the answer to your pontification. It has nothing to do with whether you think email is good or not. You pondered, I answered. That simple.
> Because it was under 1000 layers of other bullshit
Not only because of that.
git-range-diff, while absolutely a killer feature, is a relatively new feature of git as well (a bit similarly to "git rebase --update-refs" -- which I've just learned of from you <https://news.ycombinator.com/item?id=41511241>, so thanks for that :)).
(FWIW, before git-range-diff was a thing, and also before I had learned about git-tbdiff, I had developed a silly little script for myself, for doing nearly the same. Several other people did the same for themselves, too. Incremental review was vital for most serious maintainers, so it was a no-brainer to run "git format-patch" on two versions of a series, and colordiff those. The same workflow is essential for comparing a backport to the original (upstream) version of the series. Of course my stupid little script couldn't recognize reorderings of patches, or a subject line rewrite while the patch body stayed mostly the same.)
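In concrete terms, that pre-range-diff workflow amounted to roughly the following (repo, file, and branch names are invented):

```shell
# Throwaway repo with two versions of a one-patch series.
cd "$(mktemp -d)" && git init -q && git config user.email you@example.com && git config user.name you
echo base > f.c && git add f.c && git commit -qm 'initial' && git branch -M main
git checkout -q -b series-v1 && echo v1 >> f.c && git commit -qam 'add feature'
git checkout -q -b series-v2 main && echo v2 >> f.c && git commit -qam 'add feature'

# Poor man's interdiff: format both versions and diff the patch files.
git format-patch -o v1 main..series-v1 >/dev/null
git format-patch -o v2 main..series-v2 >/dev/null
diff -ru v1 v2 || true   # breaks down on reordered or retitled patches

# The built-in replacement (git 2.19+) matches patches up even when
# they were reordered or retitled:
git range-diff main series-v1 series-v2
```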
Nope, none of it was knowingly done, and plenty of teams are almost trivially convertible to the normal workflow, even without inventing a buzzword like TFA did!
Though plenty aren't. I get it. (But one of the magic phrases that really works well is "this is what git, itself, does, and there's a man page installed on your system at this very moment explaining it")
As far as I know, splitting the series into individual PRs only works if you have commit rights to the repository, so you can base one PR on a different branch (in the main repository) than main.
As an outside contributor, with a fork of the repository, your three PRs will incrementally contain change A, A+B, and A+B+C, making the review of the last two PRs harder, because you need to review diffs for code you've already reviewed in another PR.
Not sure about the fork workflow, but otherwise it is possible to change the base branch (manually, in GitHub's web interface) so that you review only the changes from A to B and from B to C rather than the cumulative diffs. Maybe this is not possible with fork workflows?
What's the point of keeping track of commits? I honestly never understood people wanting that. Is this for some kind of weird accounting / social-score system where the number of commits decides your yearly bonus?
It's useful to see how the system evolved (because you might want to go back a bit and redo the newer stuff), but it's pointless to see the mistakes made along the way, for example, unless you have some administrative use for that.
Similarly, if a sequence of commits doesn't make sense as committed, but would make better sense if split into a different sequence, then I see no problem doing that. What's the point of keeping history in a bad shape? It's just harder to work with and gives no practical advantages.
Not only do I think that's a pipe dream... I think it's technically impossible... I mean, diff has to show also what happened before whatever change took place. How do they expect not to see what was replaced? Or maybe I just don't understand what they mean by "changes since their last review".
Why not just do good old merge trains, where pull request A points to branch B and B points to master; merge B into master and thereafter point A back to master? Or am I missing the point?
This is called "stacked diffs" and it's a good workflow; the issue is that it's annoying to use on GitHub without tooling. The "point A back to master" bit isn't easy/obvious with pull requests.
From the peanut gallery of HN I’ve never understood Stacked Diffs. It looks like they reinvented commits as dependent PRs. Which are stacked on top of each other like commits are.
Fun fact: part of the reason that this article is on HN, I believe, is because I linked it to someone on another site as a means of explaining stacked diffs.
> It looks like they reinvented commits as dependent PRs.
Sort of kind of. It really depends on what you mean by PR: if we're talking about "review this branch please," which is what I would argue most mean by PRs, then yes, in the context of "stacked PRs for GitHub" it's largely about tooling that makes dependent PRs easier.
But there are other, non-GitHub tools. With those tools, you don't say "here's a branch, review it please," you say "here is a stack of commits, review them please." There's no branch going on inside. It's just a sequence of commits. This matters because it centers the commit as the unit of code review, not a branch. This also means that you can merge parts of the stack in at different times: to use the example from the article, once "small refactor" is good to go, it can be landed while "new API" is awaiting review. etc.
I think it takes actually using some of these tools to really "get it." I never understood them either, until I actually messed around with them. I'm currently solo on my project, so I don't really stack at the moment; I think it helps more the larger the team you're working with.
Oh hey, thanks for the explanation! I’ve been wondering about this for a while. The linked articles on HN tended to be so heavy on arguing how different the workflow is from what “you are used to” that the description of what it actually was got obfuscated.
I’ve used Git with email a little bit which also lets you review commits in isolation. It’s too bad that so many review tools bury the commits (ask me how many times someone at work has asked “what this is about” on the PR diff when the relevant commit message explains exactly that).
But what I like about email is that the whole series/PR also gets reviewed as a unit. Both worlds.
Shameless plug - I'm one of the creators of GitContext (https://gitcontext.com), a code review tool which has drawn much inspiration from Critique and others, and would love feedback from anyone who's interested in kicking the tires. We just launched in private alpha.
We're putting a maniacal focus into the user experience of code reviews, which we feel is overlooked by most tools. Many of the features of Critique that developers enjoy have been included in our first release...
- A focus on only the latest changes
- A familiar, side-by-side diffing interface - show 'diff from the last review' by default
- Tight integration with other tooling
- 'Action set' tracking - we allow you to pinpoint and assign line-level issues to relevant team members and track whose turn it is to act
- Satisfying gamification - plenty of buttons that go green and even some fun visual rewards for merging
Additionally, we've layered in...
- A beautiful, modern UX that provides light or dark mode
- Comments that never become outdated and reposition/evolve with the review
- Smart version tracking that handles rebases, merges, and force-pushes gracefully
- Progress tracking that allows you to see what each participant has left to complete down to the file revision level.
- A real focus on trying to get turn tracking right
We're just getting started and have a ton of ideas we can't wait to layer on. If anyone is up for giving it a try, we're actively seeking feedback. If you mention 'Hacker News' in the waitlist form we'll let you in right away.
Am I missing some of the features? The GUI does seem nicer, but functionally all I see that's been added is support for non-blocking comments and an explicit opt-out of parts of the review. Maybe the visibility of what reviewers have already completed as well?
The "pick up where you left off" is already available but requires you to manually indicate file-by-file when you've completed your review. The personal Todo list is also very present.
GitHub's code review is pretty mediocre imo... just left Meta and I miss phabricator. I'm interested to see new stuff, hope I can get off the waitlist!
> This costs more than twice as much as GitHub, does it provide twice the value?
This is not always the right question to ask. One can argue that both products are too cheap with respect to the value they offer, so the relative value between the two is irrelevant.
In other words, if you can afford $X and $2X without even thinking, and if you think even $10X would be a fair value for either, it doesn't matter if the $2X product offers only 20% more value. You would simply want to get the best, even if it's a diminishing return. I believe $9/month/developer can be classified in this category if you are actually doing code reviews.
Fair question. We're aiming to provide enough value for the price we'll ultimately target. We aren't charging yet, but wanted to provide people with some of our rough thoughts on pricing since it's a common question. For now, it's free for folks who want to kick the tires.
I'll probably try it, but there's no way I can get our finance department to pay $900/mo for something we aren't sure if we're going to use. Maybe pricing it as "Free for the first five users, $9/mo/user afterwards" would be much better aligned with the customers' incentives.
Agreed, that's a tough pill to swallow. We envisioned a trial period to let people make up their minds, but free for the first few users is another route. We'll noodle on it when we loop back to pricing. Appreciate the feedback; it's helpful.
Sorry for the confusion. It's currently only offered as a web application and only works with GitHub. We are working to expand beyond these limitations based on customer needs / interest. I assume your interest is in a desktop application?
I was just wondering! I'm on Linux, so I wasn't sure on reading the web page whether it was something I'd be able to run, and the screenshots looked more desktop-app than web-app.
We use GitLab at work, so I wouldn't be able to use it there, but I use GitHub and sourcehut for some personal and open source stuff. Code review is one of the few things I don't do in emacs, so there remains room for other tools :)
One thing you could try is using a JetBrains IDE. They can do side-by-side diffs, static analysis in the diff view, you can edit directly from the diff viewer and of course you get full navigation and comprehension tools. When I left Google I spent some time trying to use GitHub's code review tools, but they are extremely basic. In recent years I found that with a custom git workflow I could use the IDE as a code review tool and it worked much better than any web based thing.
The trick is to use git commits as the review comments. As in, you actually add // FIXME comments inline on someone else's branch. They then add another commit on top to remove them. Once you're done, you can either squash and merge or just merge. This is nice because it avoids "nit" comments. If you dislike a name someone picked, you just go change it directly yourself and avoid a round-trip, so it reduces exhaustion and grants a notion of ownership to the reviewer.
If you need discussion that isn't going to result in code changes (about design for instance) you do it on the relevant ticket. This also has the advantage that managers who stay away from code review tools still feel like they understand progress.
It helps to use a specific way of working with git to do this, and to have it integrated into your other tools. I use gitolite combined with YouTrack and TeamCity to implement this workflow, along with a kernel-like ownership tree, but it works well enough at least in a small team (I never got to try it in a big team).
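A minimal, runnable sketch of that commits-as-review-comments flow (repo, file, and branch names are invented; the FIXME marker convention is just one way to do it):

```shell
# Throwaway repo standing in for a real project.
cd "$(mktemp -d)" && git init -q && git config user.email you@example.com && git config user.name you
mkdir src && printf 'int parse(void);\n' > src/parse.c
git add src && git commit -qm 'initial' && git branch -M main

# Author's branch, then the reviewer leaves comments as code on it:
git checkout -q -b feature main
echo 'void feature(void) {}' >> src/parse.c && git commit -qam 'add feature'
echo '// FIXME(review): handle the empty-list case here' >> src/parse.c
git commit -qam 'review: inline comments'

# Author addresses each FIXME and deletes the marker in a follow-up commit:
grep -v 'FIXME(review)' src/parse.c > tmp && mv tmp src/parse.c
git commit -qam 'address review: handle empty list'

# Squash-merge so the review back-and-forth never reaches main's history:
git checkout -q main
git merge --squash -q feature && git commit -qm 'add feature (squashed)'
```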
I tend to agree, because if Google doesn't, OpenAI et al will. The real question is how does Google balance this against the loss in advertising revenue?
Promoted links or native ad formats in AI search results would be an almost too obvious guess.
Then putting advertising in whatever format LLM outputs are dressed as.
My biggest fear is abuse, e.g. LLM-based search working with targeted ads: making you slightly prefer Coke over Pepsi, or One Political Brain Parasite(tm) over another.
Thanks for replying on this thread, PC. I'm curious to get your take on how this plays out for customers with large average transaction values ($500+)?
Stripe's cost to process and refund a payment, while not zero, is generally flat (card networks refund interchange fees, Stripe only has to cover the minimal cost of running the software to process the transaction, which is the same for all transaction sizes). Shouldn't the retained fees be flat and not a percentage?
Can't help but find it ironic the winner of the 2018 Levchin Prize for Advancements in Real-World Cryptography has an invalid SSL certificate on his research website.
A few hours ago, I went to their website after seeing the submission, and it worked. Now I'm getting SSL_ERROR_NO_CYPHER_OVERLAP (in Firefox 57). Maybe they're loadbalancing and one of the servers is incorrectly configured?
You can. Responsibility for their choice of business partners lies with them, not the public. Otherwise this blame-game-treasure-hunt-rigmarole never ends.
It doesn’t matter that much, but it’s a matter of principle.
It is on Akamai, but it is on Akamai's non-TLS network (i.e., .edgesuite.net)...you have to pay more for TLS on Akamai (.edgekey.net)....I'd blame IBM but not Akamai.
For reference, the previous commenter was talking about AL's 7 congressional districts. US congressional districts are defined based on population (they're supposed to be equally sized within a state).