XxAtillaxX wrote:If you think security through obscurity is never warranted, you're ignoring many services that heavily rely on it for their business.
Let's take Google for example, which keeps their market share by obscuring the excellent Google search algorithm, the YouTube algorithm to prevent manipulating trends and views.
It even protects forums like yours by providing ReCaptcha, which uses additional obscured functions that determine whether the activity is being done by a bot or a human, much like the goal of campaigns.
I guess there are some exceptions where methods that don't rely on obscurity aren't possible, but when they are, what diff said is true.
Also, I don't think using ReCaptcha is a very good idea; I'd guess it would be waaaaayyy easier to solve with a computer than the previous captchas.
There's actually much more to it than pressing a checkbox and having a green check mark appear.
The captcha systems of the past have been largely deprecated in favour of newer, more interactive solutions for that reason.
It's very easy today to train an AI to recognize letters and digits; recognizing a large variety of objects, however, requires a larger data set, more training time, and more computation.
I've previously mentioned as well that the ReCaptcha system implements undocumented features, which likely include cursor movement tracking and (questionable, but effective) previously gathered tracking information.
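Just to illustrate the kind of behavioural signal I mean, here's a made-up toy (this is not how ReCaptcha actually works, and every name and threshold below is invented):

import math

def looks_scripted(points):
    # points: list of (x, y, t) cursor samples recorded before the click
    if len(points) < 3:
        return True                      # no real movement at all
    # A human path wobbles; a naive bot jumps straight to the target.
    travelled = 0.0
    for (x1, y1, _), (x2, y2, _) in zip(points, points[1:]):
        travelled += math.hypot(x2 - x1, y2 - y1)
    (x0, y0, t0), (xn, yn, tn) = points[0], points[-1]
    direct = math.hypot(xn - x0, yn - y0)
    too_straight = direct > 0 and travelled / direct < 1.02   # near-perfect straight line
    too_fast = (tn - t0) < 0.05                               # whole path in under 50 ms
    return too_straight or too_fast

A real system presumably combines many signals like this, which is part of why the details stay undocumented.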
Not every technology has the same security potential; a generalization like 'any system that relies on obscurity isn't good' doesn't provide any constructive criticism, and is just a lazy sound bite.
Take, for instance, a cryptographic algorithm: you can be reasonably assured of strength without obscurity, since the algorithm can be known and remain secure. A locking mechanism, on the other hand, cannot.
If you know how to break a lock, then the system is compromised. You could argue that you should physically strengthen the system; however, there's always going to be a physical single point of weakness.
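To make that concrete with a small illustration (not anything from EE): HMAC-SHA256 is a completely public, well-studied algorithm, yet it stays secure as long as only the key is secret; nothing about hiding how it works is required.

import hmac, hashlib, secrets

key = secrets.token_bytes(32)                           # the only secret in the system
msg = b"level completed: time=95s coins=12"
tag = hmac.new(key, msg, hashlib.sha256).hexdigest()    # algorithm is fully public

# Anyone can read the spec and this code; without the key they still
# can't forge a valid tag for a tampered message.
assert hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).hexdigest())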
It's impossible to make a secure anti-cheat system where the variables are known, because if you know exactly how the heuristics function, you can evade detection far more easily than if you didn't.
*u stinky*
Offline
XxAtillaxX wrote:I've previously mentioned as well that the ReCaptcha system implements undocumented features, which likely include cursor movement tracking and (questionable, but effective) previously gathered tracking information.
My argument is that you don't need to actually solve anything, so whatever it gathers, you could fake. With the image ones, although you could technically train an AI to solve them, it would be MUCH harder, and there is no way to just repeat whatever was done before.
XxAtillaxX wrote:It's impossible to make a secure anti-cheat system where the variables are known, because if you know exactly how the heuristics function, you can evade detection far more easily than if you didn't.
What about having a copy of the physics engine running on the server? If you had a way to make sure that both had the same inputs, then you could easily check the movements of all the players (and even decrease the number of messages that need to be sent).
I guess this could add extra strain to the servers though, although I wouldn't think it would be too bad.
The best idea I can think of would be to have the server physics running half a second or so behind the clients, as it will almost definitely take less time than that for messages to be sent / received. This would mean you could just cache the messages until the tick is reached (assuming they are valid until then), and then calculate everything exactly the same as in the clients.
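Roughly what I mean, as a sketch (all the names, tick rate and thresholds here are made up, and it assumes the physics step is deterministic and clients report their inputs and claimed positions per tick):

from collections import defaultdict

TICK_RATE = 50                 # physics ticks per second (assumed)
DELAY_TICKS = TICK_RATE // 2   # run the server sim ~0.5 s behind the clients
MAX_DRIFT = 0.25               # allowed position error, in blocks (assumed)

pending_inputs = defaultdict(dict)   # player -> {tick: input}
reported_pos = defaultdict(dict)     # player -> {tick: (x, y)}
server_state = {}                    # player -> authoritative physics state

def on_input_message(player, tick, keys, pos):
    # Cache each client's input and claimed position until its tick is due.
    pending_inputs[player][tick] = keys
    reported_pos[player][tick] = pos

def run_delayed_tick(current_tick, step):
    # Re-simulate tick (current_tick - DELAY_TICKS) with the cached inputs,
    # using the same deterministic physics function `step` as the clients.
    sim_tick = current_tick - DELAY_TICKS
    for player, state in server_state.items():
        keys = pending_inputs[player].pop(sim_tick, None)
        claimed = reported_pos[player].pop(sim_tick, None)
        if keys is None or claimed is None:
            continue                     # missing data; a real server needs a policy here
        new_state = step(state, keys)
        server_state[player] = new_state
        dx = new_state["x"] - claimed[0]
        dy = new_state["y"] - claimed[1]
        if dx * dx + dy * dy > MAX_DRIFT * MAX_DRIFT:
            kick(player, "position drifted from the server simulation")

def kick(player, reason):
    print(f"kick {player}: {reason}")

The nice part is that the server only has to confirm the inputs produce the positions the client claims; it doesn't have to run ahead of anyone.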
Offline
XxAtillaxX wrote:I've previously mentioned as well that the ReCaptcha system implements undocumented features, which likely include cursor movement tracking and (questionable, but effective) previously gathered tracking information.
My argument is that you don't need to actually solve anything, so whatever it gathers, you could fake. With the image ones, although you could technically train an AI to solve them, it would be MUCH harder, and there is no way to just repeat whatever was done before.
I don't follow what you're getting at.
XxAtillaxX wrote:It's impossible to make a secure anti-cheat system where the variables are known, because if you know exactly how the heuristics function, you can evade detection far more easily than if you didn't.
What about having a copy of the physics engine running on the server? If you had a way to make sure that both had the same inputs, then you could easily check the movements of all the players (and even decrease the number of messages that need to be sent).
I guess this could add extra strain to the servers though, although I wouldn't think it would be too bad.
The best idea I can think of would be to have the server physics running half a second or so behind the clients, as it will almost definitely take less time than that for messages to be sent / received. This would mean you could just cache the messages until the tick is reached (assuming they are valid until then), and then calculate everything exactly the same as in the clients.
It's actually not hard to evade cheat detection, given anyone can merely replay movements using macros or by getting needlessly fancy with bots; in both cases, it'd be a waste of time and effort to emulate physics server-side.
*u stinky*
Offline
XxAtillaxX wrote:destroyer123 wrote:My argument is that you don't need to actually solve anything, so whatever it gathers, you could fake. With the image ones, although you could technically train an AI to solve them, it would be MUCH harder, and there is no way to just repeat whatever was done before.
I don't follow what you're getting at.
I'm just thinking that if the computer doesn't actually need someone to solve anything (as it does with the image and text captchas), it would be easier to do it without a person at the computer. It's fairly easy to just have the computer do what it would normally do, just automatically instead of with someone inputting things, but it's much more difficult to have the computer replace the person when they actually need to think about something, because then you need to make some text / image recognition AI, which is much more complicated.
XxAtillaxX wrote:destroyer123 wrote:What about having a copy of the physics engine running on the server? If you had a way to make sure that both had the same inputs, then you could easily check the movements of all the players (and even decrease the number of messages that need to be sent).
I guess this could add extra strain to the servers though, although I wouldn't think it would be too bad.
The best idea I can think of would be to have the server physics running half a second or so behind the clients, as it will almost definitely take less time than that for messages to be sent / received. This would mean you could just cache the messages until the tick is reached (assuming they are valid until then), and then calculate everything exactly the same as in the clients.
It's actually not hard to evade cheat detection, given anyone can merely replay movements using macros or by getting needlessly fancy with bots; in both cases, it'd be a waste of time and effort to emulate physics server-side.
I guess you're right about that; I was just thinking about the actual physics hacks, like god mode hacks, and forgot about the other ways to cheat. I guess the physics hacks are the most significant though, so it would still be good to prevent them, even if it doesn't prevent all cheating.
Offline
XxAtillaxX wrote:Different55 wrote:Mieaz wrote:they already do, the real question is, can staff make anti-cheat and tell how it works but still make it unbreakable?
Any system that relies on obscurity to stay safe is not a good system to begin with.
I know you're trying to sound deep and intellectual, but there are logical boundaries and exceptions to security techniques.
In this scenario the goal is to reasonably distinguish human activity from automated activity, which is entirely different from normal, less infallible security applications.
Everybody Edits is (or was) a sandbox game, which means implementing inherently non-sandbox features like campaigns defeats the original sandbox concept.
I'm not advocating for this system at all; however, I'm aware that if they openly expressed the techniques used, it'd become significantly easier to evade detection.
If you think security through obscurity is never warranted, you're ignoring many services that heavily rely on it for their business.
Let's take Google for example, which keeps their market share by obscuring the excellent Google search algorithm, the YouTube algorithm to prevent manipulating trends and views.
It even protects forums like yours by providing ReCaptcha, which uses additional obscured functions that determine whether the activity is being done by a bot or a human, much like the goal of campaigns.
Besides ReCaptcha, none of your examples are even close to being relevant; neither of them is security-related. And ReCaptcha has been broken. There are plenty of bots that get past it on these forums, only to be stopped at the next one or two layers of filters that don't rely on "hey, you can't get past it because I'm not telling you how to get past it." Security through obscurity is no security at all.
"Sometimes failing a leap of faith is better than inching forward"
- ShinsukeIto
Offline
I believe what we're starting a huge debate about is this:
if a human can do it, a bot can too
(and maybe even better sometimes)
Offline
XxAtillaxX wrote:Different55 wrote:Mieaz wrote:they already do, the real question is, can staff make anti-cheat and tell how it works but still make it unbreakable?
Any system that relies on obscurity to stay safe is not a good system to begin with.
I know you're trying to sound deep and intellectual, but there are logical boundaries and exceptions to security techniques.
In this scenario the goal is to reasonably distinguish human activity from automated activity, which is entirely different from normal, less infallible security applications.
Everybody Edits is (or was) a sandbox game, which means implementing inherently non-sandbox features like campaigns defeats the original sandbox concept.
I'm not advocating for this system at all; however, I'm aware that if they openly expressed the techniques used, it'd become significantly easier to evade detection.
If you think security through obscurity is never warranted, you're ignoring many services that heavily rely on it for their business.
Let's take Google for example, which keeps their market share by obscuring the excellent Google search algorithm, the YouTube algorithm to prevent manipulating trends and views.
It even protects forums like yours by providing ReCaptcha, which uses additional obscured functions that determine whether the activity is being done by a bot or a human, much like the goal of campaigns.
Besides ReCaptcha, none of your examples are even close to being relevant; neither of them is security-related. And ReCaptcha has been broken. There are plenty of bots that get past it on these forums, only to be stopped at the next one or two layers of filters that don't rely on "hey, you can't get past it because I'm not telling you how to get past it." Security through obscurity is no security at all.
No, actually, they're underpaid humans in every instance I've seen. I've played with some sites that pay you to solve captcha for spammers.
Tell me why Steam doesn't give any details about how their VAC system works?
*u stinky*
Offline
Hey it's almost as if both complexity and obscurity are useful in creating effective security features.
Who would've thought it?
One bot to rule them all, one bot to find them. One bot to bring them all... and with this cliché blind them.
Offline
Different55 wrote:XxAtillaxX wrote:Different55 wrote:Mieaz wrote:they already do, the real question is, can staff make anti-cheat and tell how it works but still make it unbreakable?
Any system that relies on obscurity to stay safe is not a good system to begin with.
I know you're trying to sound deep and intellectual, but there are logical boundaries and exceptions to security techniques.
In this scenario the goal is to reasonably distinguish human activity from automated activity, which is entirely different from normal, less infallible security applications.
Everybody Edits is (or was) a sandbox game, which means implementing inherently non-sandbox features like campaigns defeats the original sandbox concept.
I'm not advocating for this system at all; however, I'm aware that if they openly expressed the techniques used, it'd become significantly easier to evade detection.
If you think security through obscurity is never warranted, you're ignoring many services that heavily rely on it for their business.
Let's take Google for example, which keeps their market share by obscuring the excellent Google search algorithm, the YouTube algorithm to prevent manipulating trends and views.
It even protects forums like yours by providing ReCaptcha, which uses additional obscured functions that determine whether the activity is being done by a bot or a human, much like the goal of campaigns.
Besides ReCaptcha, none of your examples are even close to being relevant; neither of them is security-related. And ReCaptcha has been broken. There are plenty of bots that get past it on these forums, only to be stopped at the next one or two layers of filters that don't rely on "hey, you can't get past it because I'm not telling you how to get past it." Security through obscurity is no security at all.
No, actually, they're underpaid humans in every instance I've seen. I've played with some sites that pay you to solve captcha for spammers.
Tell me why Steam doesn't give any details about how their VAC system works?
Because obscurity is a good layer only when combined with good practices everywhere else. If your solution relies on secrets to keep itself from falling to bits, there's no way what you have is a good solution.
"Sometimes failing a leap of faith is better than inching forward"
- ShinsukeIto
Offline
Different55 wrote:Because obscurity is a good layer only when combined with good practices everywhere else. If your solution relies on secrets to keep itself from falling to bits, there's no way what you have is a good solution.
I'm merely stating that there are applications (like game cheat detection) where obscurity is the only option.
Is there any way you could ensure heuristics in cheat detection systems are successful (to the same degree) without obscuring any techniques used?
*u stinky*
Offline
Different55 wrote:Because obscurity is a good layer only when combined with good practices everywhere else. If your solution relies on secrets to keep itself from falling to bits, there's no way what you have is a good solution.
I'm merely stating that there are applications (like game cheat detection) where obscurity is the only option.
Is there any way you could ensure heuristics in cheat detection systems are successful (to the same degree) without obscuring any techniques used?
I think the best approach would just be to prevent the worst game-breaking hacks (the physics ones) and to just leave the other ones alone, as they can't ever be completely fixed (as you say, the only way would be to have some complicated check that relies on people not knowing what it's checking for, which, as Different says, isn't good practice).
This would prevent all the problems where players can get to places they shouldn't be able to and break things like switches / keys, get actual god mode, or just win in a few seconds, which IMO are the most important to fix.
Offline
You are all looking way too deep into this.
I've seen people complete it in seconds.
Offline
Emalton wrote:You are all looking way too deep into this.
I've seen people complete it in seconds.
They completed them earlier, and the anticheat doesn't work for them anymore.
Offline
Gosha wrote:Emalton wrote:You are all looking way too deep into this.
I've seen people complete it in seconds.
They completed them earlier, and the anticheat doesn't work for them anymore.
That isn't true for everyone.
Offline
Gosha wrote:Emalton wrote:You are all looking way too deep into this.
I've seen people complete it in seconds.
They completed them earlier, and the anticheat doesn't work for them anymore.
That isn't true for everyone.
Actually, no. It is true for everybody.
If you try to beat any campaign, whatever it is, beating it in under 30 seconds (if it's the tutorial) or under a minute (if it's not) will result in instant cheat detection.
Offline
If I'm not wrong and what I heard is right, then it works like this (rough sketch below):
*You will only be kicked when touching the trophy, and only if cheat detection triggered.
If you move over a block (like basic) without god mode access, then you will be kicked.
Less time than the required minimum. Time is counted as movement, not staying in the same place for an hour...
Fewer coins than the required minimum. E.g. Crew Odyseus - it's 1!
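If those rules are roughly right, the whole check might boil down to something like this (all names, fields and thresholds below are invented for illustration; this is not the actual EE code):

def check_campaign_completion(run, level):
    # Called when the player touches the trophy; returns a kick reason or None.
    reasons = []
    if run.passed_solid_block_without_god:     # e.g. moved through a basic block
        reasons.append("moved through a solid block without god mode")
    if run.time_moving < level.min_time:       # time counted while moving, not idling
        reasons.append("finished faster than the minimum time")
    if run.coins_collected < level.min_coins:  # e.g. Crew Odyseus requires 1 coin
        reasons.append("collected fewer coins than the level requires")
    return "; ".join(reasons) or None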
So perhaps the answer to OP's question is:
How does the campaign cheat detection work?
It doesn't.
One bot to rule them all, one bot to find them. One bot to bring them all... and with this cliché blind them.
Offline
If I'm not wrong and what I heard is right, then it works like this:
*You will only be kicked when touching the trophy, and only if cheat detection triggered.
If you move over a block (like basic) without god mode access, then you will be kicked.
Less time than the required minimum. Time is counted as movement, not staying in the same place for an hour...
Fewer coins than the required minimum. E.g. Crew Odyseus - it's 1!
Some campaigns don't need them, like ee mountain.
thanks hg for making this much better and ty for my avatar as well
Offline
<snip>
If you try to beat any campaign, whatever it is, beating it in under 30 seconds (if it's the tutorial) or under a minute (if it's not) will result in instant cheat detection.
That's false, unless that was recently added.
Offline
Asking the ppl who cheated ppf is the best way to find the answer to this question *gh*
Offline