HellBound Hackers | Computer General | Web hacking

Page 3 of 3
RE: website defacement
Posted by Guest on 26-08-08 03:20
hacker2k wrote:
^ --- Just in case you didn't know where this conversation came from: it started out as just skids, so it should be continued as just skids.


Then, like I said... that conversation is pointless.


hacker2k wrote:
If you want to talk about the rest of the people who are malicious and don't report the vulnerabilities or code exploits for milw0rm, then you should really be talking about how to keep their options limited after they get in and how to do intrusion analysis.


I'd settle for that as having more purpose than an anti-skid methodology discussion.


hacker2k wrote:
I'm also not talking about using a password or anything for authentication. I'm talking about just using some protection so that an attacker can't just make the browser do a GET request or whatever to a site, with the site thinking it's a legitimate request, and end up doing something that will harm you. That's why I said a captcha alone could help in that case.


Can't say I see how a captcha would inhibit CSRF attempts. Would love to hear your ideas on that.



Here's a better link: http://www.techni. . .s/CSS.html

If we're going to get into semantics, then let's just refer to everything as an "attack vector" when we're discussing specifics. That way, we can ignore classification based upon intended use and just play dictionary tag every time we try to perceive meaning.


hacker2k wrote:
You want logical and well-founded points? If they go off-track or seem like they're doing something malicious, log it.


Too broad.


hacker2k wrote:
Stay on top of patches if it isn't your own code. If it's your own code, do regular tests on it to see if you can find a vulnerability. Have friends also test it out to see if they can find bugs in it that could be exploited. Fix even the tiniest bug, because that could turn into a vulnerability. Watch things like Bugtraq and milw0rm for web-application vulnerabilities. Read everything you can about security.


Still broad and lofty, but better. Why do the tips only apply to people who own the code... and not the people who use it?


RE: website defacement
Posted by hacker2k on 26-08-08 12:04
I was just thinking of a captcha because, since it's probably going to be unique for everyone, the attacker couldn't abuse a trust relationship to change passwords, etc. through JavaScript or even just HTML. For example, Bill is the administrator of site A, which is vulnerable to a CSRF attack that would allow an attacker to change Bill's password. Bill then goes to a site owned by the attacker. Bill didn't log off of site A, so his session is still active. The attacker uses an image to execute the change-password script on site A. Since site A made no attempt to make sure the person requesting a password change was a human and not just a browser, the attacker was able to gain administrative access to site A. A captcha would distinguish between a browser making an automated request (which could have been forced by a malicious attacker) and a user actually clicking the button. Really, anything that is unique to a session could be used to protect against CSRF. I don't know a lot about CSRF, so I might be wrong in my thinking, but from what I understand that should protect against it. Correct me if I'm wrong, though.
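The "anything unique to a session" point is essentially what is now called an anti-CSRF (synchronizer) token. A rough sketch of that idea in PHP, assuming PHP 7+ with sessions; the file, field, and session key names here are only illustrative, not taken from the thread:

<?php
// change_password.php -- minimal sketch of a per-session anti-CSRF token.
session_start();

// Issue one token per session and embed it in every state-changing form.
if (empty($_SESSION['csrf_token'])) {
    $_SESSION['csrf_token'] = bin2hex(random_bytes(32));
}

if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    // A forged cross-site request cannot know this value, so it fails here.
    if (!isset($_POST['csrf_token'])
        || !hash_equals($_SESSION['csrf_token'], $_POST['csrf_token'])) {
        http_response_code(403);
        exit('Request rejected: missing or invalid CSRF token.');
    }
    // Token checks out: actually change the password here.
    echo 'Password changed.';
    exit;
}
?>
<form method="post" action="change_password.php">
    <input type="hidden" name="csrf_token"
           value="<?php echo htmlspecialchars($_SESSION['csrf_token']); ?>">
    <input type="password" name="new_password">
    <input type="submit" value="Change password">
</form>

Unlike a captcha, this doesn't interrupt the user, but it blocks a forged request for the same underlying reason: the attacker's page can't read a value tied to the victim's session.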

I don't use a lot of other people's code, so I don't really know much about how to protect against attacks in code that isn't your own, except for common sense like staying on top of patches. By "read everything about security" I meant read about common vulnerabilities; I know that really wasn't clear. By doing regular tests I meant try to hack your own web applications and see if you can find vulnerabilities. If you are a big enough company, hire penetration testers (that's good for both your code and other people's code). By going off-track I meant that if it looks like someone is trying to find a vulnerability, have it logged. You can do that while filtering input. Did I clarify it enough?
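A rough sketch of the "have it logged while filtering input" idea, assuming a plain PHP handler; the patterns, parameter name, and log path are only examples:

<?php
// input_filter.php -- sketch: log requests that look like probing while sanitizing input.

// Crude illustrative patterns for common probe attempts (nowhere near a complete list).
$suspicious = array(
    '/\.\.\//',                 // directory traversal
    '/<script\b/i',             // script injection attempts
    '/\bunion\b.+\bselect\b/i', // SQL injection probing
);

function filter_and_log($name, $value, array $patterns) {
    foreach ($patterns as $pattern) {
        if (preg_match($pattern, $value)) {
            // Record who sent what, then keep handling the request normally.
            error_log(
                sprintf("[probe] %s param=%s value=%s\n", $_SERVER['REMOTE_ADDR'], $name, $value),
                3,
                '/var/log/webapp_probes.log'
            );
            break;
        }
    }
    // Escape on output regardless of whether it looked suspicious.
    return htmlspecialchars($value, ENT_QUOTES, 'UTF-8');
}

$search = filter_and_log('q', isset($_GET['q']) ? $_GET['q'] : '', $suspicious);
echo 'You searched for: ' . $search;

Logging in the same place as the filtering keeps the two in sync: the request is still served, but there's a trail if the same address keeps probing.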

Edited on 26-08-08 12:32
RE: website defacement
Posted by Guest on 26-08-08 13:47
hacker2k wrote:
I was just thinking of a captcha because, since it's probably going to be unique for everyone, the attacker couldn't abuse a trust relationship to change passwords, etc. through JavaScript or even just HTML. For example, Bill is the administrator of site A, which is vulnerable to a CSRF attack that would allow an attacker to change Bill's password. Bill then goes to a site owned by the attacker. Bill didn't log off of site A, so his session is still active. The attacker uses an image to execute the change-password script on site A. Since site A made no attempt to make sure the person requesting a password change was a human and not just a browser, the attacker was able to gain administrative access to site A. A captcha would distinguish between a browser making an automated request (which could have been forced by a malicious attacker) and a user actually clicking the button. Really, anything that is unique to a session could be used to protect against CSRF. I don't know a lot about CSRF, so I might be wrong in my thinking, but from what I understand that should protect against it. Correct me if I'm wrong, though.


I know there are some sites that protect against off-site navigation and such. Best I can tell, it's probably a case where each link gets a JS check for either a non-relative web address or an absolute address that doesn't point to the blah.com domain. If it leads off the site, an intermediate page is served up that captures the page you came from and the destination (a $_SERVER variable for the first, maybe a session variable for the second) and asks you if you're sure.

Now... we couldn't just add a manual link check (anchor tags only), because CSRF isn't limited to that; while href and src are the most common attributes for a CSRF attack, there's nothing to say that any old JS event and a window.location assignment won't serve the same purpose. So we'd have to capture the JS event that fires when a page is navigated away from, then do our check. That would be the onunload or onbeforeunload event, which we'd put on our body tag and point at a JS function that checks the destination URL. As for getting the destination URL in JS, that's where I'd have to do some homework... really, I haven't bothered with anything like this because I disallow HTML altogether. But it could be done... and if you want the middle page that asks if you're sure, you could have the JS event populate hidden fields in a form on the page and submit that form as a POST, with its action pointing at the middle page.
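If the JS handler posts the destination in a hidden field (called dest below purely for illustration), the middle page could be a small PHP script along these lines, using $_SERVER for the page you came from and a session variable for the destination, as described above:

<?php
// leaving.php -- sketch of the intermediate "are you sure?" page for off-site links.
// Assumes the client-side check POSTs the target URL in a hidden "dest" field.
session_start();

if (isset($_GET['go']) && !empty($_SESSION['pending_dest'])) {
    // User confirmed: send them to the destination remembered in the session.
    header('Location: ' . $_SESSION['pending_dest']);
    unset($_SESSION['pending_dest']);
    exit;
}

$dest = isset($_POST['dest']) ? $_POST['dest'] : '';
$from = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : 'this site';

// Only keep well-formed absolute http(s) URLs.
if (filter_var($dest, FILTER_VALIDATE_URL) === false || !preg_match('#^https?://#i', $dest)) {
    exit('No valid destination supplied.');
}

// Park the destination in a session variable so the confirm link can't be tampered with.
$_SESSION['pending_dest'] = $dest;
?>
<p>You followed a link away from <?php echo htmlspecialchars($from); ?>.</p>
<p>Destination: <?php echo htmlspecialchars($dest); ?></p>
<p>
    <a href="leaving.php?go=1">Yes, take me there</a> |
    <a href="javascript:history.back()">No, go back</a>
</p>

The confirmation page is the easy half; as the post says, the harder part is making sure every way of navigating off the page actually routes through it.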

So, really, a captcha wouldn't be any more helpful than a normal page. Captchas are just for preventing bot automation of a page's form.


hacker2k wrote:
I don't use a lot of other people's code, so I don't really know much about how to protect against attacks in code that isn't your own, except for common sense like staying on top of patches. By "read everything about security" I meant read about common vulnerabilities; I know that really wasn't clear. By doing regular tests I meant try to hack your own web applications and see if you can find vulnerabilities. If you are a big enough company, hire penetration testers (that's good for both your code and other people's code). By going off-track I meant that if it looks like someone is trying to find a vulnerability, have it logged. You can do that while filtering input. Did I clarify it enough?


Well, as long as you're using the same language, it's going to be the same practices at work. Other people's code might be harder to read or understand, but that doesn't really matter... Common vulnerabilities are good to read about, but secure coding practices are better and more comprehensive.

Regular tests on your site are good as long as the testers are people you either trust or have contractually bound. Still... "going off-track" is vague. Sit down and make a list of the things you'll be tracking... the patterns you'll be looking out for, and you'll realize that's a pretty big and difficult list.


Page 3 of 3