1. It says it is $8/month, which is not mentioned on the github page, so I had been thinking it was free in addition to being AGPL-3.0; it links to https://snapify.it/ which is where I see the fee.
2. It says "for everyone" but looks like it might be Linux-specific, and it doesn't say anything about which OSes are supported.
IIUC, the fee is just to use their instance, and hosting your own instance is actually free. Also, it looks like the client side of it runs in a browser, so it will support pretty much any OS.
That's a weird way of saying "lack of competition". As others have mentioned, why should Epic Games bother supporting Linux?
Considering that I'm gaming on Linux, the number of competitors is pretty small and close to zero, I'm not sure why I should be forced to switch operating systems to support the "better platform".
I say this as someone who's been running Vortex/Skyrim modding on Linux years before there was official support for it and I'm kind of shocked honestly to hear that people are cheering for something I did so long ago (5 years to be precise) I hardly remember the time doing it.
All of the concepts SOMA explored were already familiar to me, but the experience of exploring the through the game was so much stronger than reading about them in a text book. Such a strong, lasting effect, I wish I could play it again for the first time.
Yeah I want to know how many people are using AI for social purposes; to provide the role of a friend. But I don’t know what category that would be under.
When you use a FOSS product more, the person that wrote the code doesn't end up spending more money. When you use a free service more, someone is paying for that usage and resources.
That's not remotely the same? A default setting that can easily be changed for a feature the vendor didn't have a solution for?
To give you an example. Try to use Google Search without sending your data to Google. You cannot use the product without it, you cannot opt out. Firefox, you can use just fine with Google not being your search engine.
Why isn't it the same? The fact that it is possible to change that default means google simply pays less for it than they otherwise would if it wasn't changeable.
It's not a binary toggle - firefox is selling you as a source of revenue for themselves. They're just not making it as extreme as it is possible to be - in the hopes that you don't switch away.
You can compare same situation with safari in iOS. Except google pays a lot more, since you cannot switch away in iOS as easily, and culturally there's more reluctance compared to firefox users. This makes google pay more for iOS traffic, as those users are worth more.
The problem is that this is equivocating between "selling your data" and setting Google as the default search. The former implies Firefox is harvesting your telemetry and personally identifying you and selling it off to the highest bidder. The latter is setting Google search as an optional default, where any telemetry is part of customary interactions with Google search rather than anything specific that Firefox is doing.
The sense in which you are the product on Firefox is that they want to maintain a large enough user base that search licensing is valuable enough to sell to Google.
> Why isn't it the same? The fact that it is possible to change that default means google simply pays less for it than they otherwise would if it wasn't changeable.
Because I can change that default and still use the thing. That's how it's very different.
Typically when people say that when something is free you are the product. They mean that it's free because your data is being sold, implying that without telemetry it wouldn't be free. That's not the case here as far as I know
Google is paying Mozilla to be the default search engine. Google is only paying Mozilla because Firefox has users, regardless if they use the default search engine or not. So, indirectly everyone is the 'product'.
I'm sure if 95% of people did swap to ddg, then google may change their mind.
Also I believe there is the possibility Google also pays Mozilla to offer competition so Chrome isn't considered a monopoly (but maybe Edge has changed that to some extent?)
Don’t they buy the search bar to have another competitor and not get forced to give away chrome for antitrust reasons? I don’t think they care about the search bar THAT much, it’s basically a donation right?
But then I’m not the product? The government is basically forcing google to pay my browser developer, how does that make me the product it is bad for me?
You are still "the product" even if google derives secondary benefits - because you are using firefox. Google doesn't pay the other forks of firefox money (at least, as far as i know). It's because you aren't using those browsers (you as in the royal you).
I didn't say you being a product is bad - but it does not align customer with software company. You may be OK with being sold as a product to google, as this relationship currently isn't damaging. But what if a future offer which would damage you is taken by mozilla because it's profitable?
The quote refers to a Faustian bargain offered by the Penn's. They'd bankroll securing a township, as long as the township gave up the ability to tax them. The quote points out that by giving up the liberty to tax, for short term protection, ultimately the township would end up having neither the freedom to tax to fund further defense, or long term security so might as well hold onto the ability to tax and just figure out the security issue.
Moral: don't give up freedoms for temporary gains. It never balances out in the end.
It's become more a shorthand for saying much more. Though the original context differs from how it is used today (common with many idioms).
People do not generally believe a seat belt limits your liberty, but you're not exactly wrong either. But maybe in order to understand what they mean it's better to not play devil's advocate. So try an example like the NSA's mass surveillance. This was instituted under the pretext of keeping Americans safe. It was a temporary liberty people were willing to sacrifice for safety. But not only did find the pretext was wrong (no WMDs were found...) but we never were returned that liberty either, now were we?
That's the meaning. Or what people use it to mean. But if you try to tear down any saying it's not going to be hard to. Natural languages utility isn't in their precision, it's their flexibility. If you want precision, well I for one am not going to take all the time necessary to write this in a formal language like math and I'd doubt you'd have the patience for it either (who would?). So let's operate in good faith instead. It's far more convenient and far less taxing
1) Do they really? I honestly don't know, are there independent polls about this?
2) What makes them think they have any right to decide for other people's children? I would be OK with them genuinely thinking they need to surveil their own children but if 1) is true then there is this underlying need of people to control others and I am not OK with that. This is how minorities are suppressed and harassed - same mechanism, different target.
> While the instructions for authors for Paediatrics & Child Health has at times indicated the case reports are fictional, that disclosure has never appeared on the journal articles themselves.
Sounds like they were asking authors for fiction, so probably plenty of them are.
They asked the authors for fiction “at times”. Meaning that some are fiction, and some very well might not be. The best they can do is try to contact the authors and see if the case report they wrote is fictional or not. The second best is to admit that they made a mess and say “the case reports might or might not be fictional, we have no way of knowing”.
I suspect you're reading too much into that phrase. It seems more likely to me that the reporter here contacted one or more of the case report authors directly to ask for a copy of what instructions they received from the journal at the time. (This would be good journalistic practice, rather than just take the journal's word for it, when they might have an incentive to lie.) But they obviously couldn't explicitly confirm that every single author received similar instructions, so they used the “at times” phrase to cover their ass.
If they had direct evidence that some author's instructions failed to ask for the case study to be fictionalized, I think they would have specifically said that. It's more definitive, and catches the journal in a lie.
I'm pretty sure what happened here is that:
1) The journal always asked for and thought they received fictionalized case studies.
2) It never occurred to them that they were presenting the case studies in a way that could be misinterpreted. (This is indefensible negligence, but I also understand how it could have happened "innocently".)
3) Once the issue came to light, they issues blanket corrections to every case study study to describe them as fiction because they asked for fiction and edited them all as fiction. (I.e., Didn't do any fact checking or independent confirmation, beyond medical broad strokes.)
4) At least one author didn't read the instructions carefully enough and sent in a real case study, which as the article says, wasn't caught by the editors during the review process. (And really, how would they catch it? If they thought they asked for fiction, they wouldn't be fact checking it.)
I actually think the disclaimer may be appropriate, even on the article that was written as a true story, if it wasn't reviewed as one.
> If they had direct evidence that some author's instructions failed to ask for the case study to be fictionalized, I think they would have specifically said that.
Which they do. They specifically say that. “Neither the instructions for authors from 2010 — when Koren and his coauthor Michael Rieder would have written their article — nor the linked list of article types — state the cases are fictionalized, or fictional.”
“An archived version from September stated, ‘Each highlight is a teaching tool that presents a short clinical example, from one of the studies or one-time surveys,’ with no mention of fiction.”
These are direct quotes from the article. The exact kind you are asking for. With inline links to the archived documents. And yes it is very definitive.
> I'm pretty sure what happened here is that:
No need to speculate. Just read the article.
> 1) The journal always asked for […] fictionalized case studies.
Possibly, but the big companies have ratcheting expectations to meet, and prefer to keep benefits to themselves, while leaving us with the drawbacks. e.g. Tesla using telemetry to protect itself but not customers without court order.
You've misread or I was not clear enough. I advocate rejecting this system—one must understand the boundaries in order to do that. Saying, "I won't bother" is the opposite of that.
It's a valid observation that we can bypass the coding AI's user prompting gate with the right prompt.
But is it a security issue on copilot that the user explicitly giving AI permission and instructed it to curl a url?
Regardless of the coding agent, I suspect eventually all of the coding agents will behave the same with enough prompting regardless if it's a curl command to a malicious or legitimate site.
The user didn't need to give it curl permission, that's the whole issue:
> Copilot also has an external URL access check that requires user approval when commands like curl, wget, or Copilot’s built-in web-fetch tool request access to external domains [1].
> This article demonstrates how attackers can craft malicious commands that go entirely undetected by the validator - executing immediately on the victim’s computer with no human-in-the-loop approval whatsoever.
I think there's different conversations happening and I don't think we're having the same conversation.
This is the claim by the article: "Vulnerabilities in the GitHub Copilot CLI expose users to the risk of arbitrary shell command execution via indirect prompt injection without any user approval"
But this is not true, the author gave explicit permission on copilot startup to trust and execute code in the folder.
Here's the exact starting screen on copilot:
│ Confirm folder trust │
│ │
│ ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮ │
│ │ /Users/me/Documents │ │
│ ╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ │
│ │
│ Copilot may read files in this folder. Reading untrusted files may lead Copilot to behave in unexpected ways. With your permission, Copilot may execute │
│ code or bash commands in this folder. Executing untrusted code is unsafe. │
│ │
│ Do you trust the files in this folder? │
│ │
│ 1. Yes │
│ 2. Yes, and remember this folder for future sessions │
│ 3. No (Esc) │
And `The injection is stored in a README file from the cloned repository, which is an untrusted codebase.`
"With your permission, Copilot may execute code or bash commands in this folder." could be interpreted either way I suppose, but the actual question is "do you trust the files in this folder" and not "do you trust Copilot to execute any bash commands it wants without further permissions prompts".
The risk isn't solely that there might be a prompt injection, Copilot could just discover `env sh` doesn't need a user prompt and just start using that spontaneously and bypassing user confirmation. If you haven't started Copilot in yolo mode that would be very surprising and risky.
If it usually asks for user confirmation before running bash commands then there should, ideally, not be a secret yolo mode that the agent can just start using without asking. That's obviously a bad idea!
"Actually copilot is always secretly in yolo mode, that's working as designed" seems like a pretty serious violation of expectations. Why even have any user confirmations at all?
If the user is working in a folder where copilot can discover a malicious `env sh` to run, the user should not give permission to trust the files in the folder.
I think it's a valid observation that we can bypass the coding AI's user prompting gate with the right prompt. That is a valid limitation of LLM supported agentic workflows today.
But that's not what this article claims. The article claims that there was no user approval and no user interaction beyond initial query and that the copilot is downloading + executing malware.
I'm saying this is sensationalized and not a novel technical vulnerability write up.
The author explicitly gave approval for copilot to trust "untrusted repository". Crafted a file which had instructions to do a curl command despite the warnings on copilot start up. It is not operating secretly in yolo mode.
If the claim of the article is "Copilot doesn't gate tool calls with env", I'd have a different response. But I also have to mention, you can tune approved tool calls.
reply