Well, I've successfully gotten the textures from Guild Wars.
Tell me if you are interested.
I know how to get all shaders and textures and even wireframes from Guild Wars, so let me know.
Important information: this site is currently scheduled to go offline indefinitely by end of the year.
Guild Wars .DAT File
-
- advanced
- Posts: 45
- Joined: Sun Jul 16, 2006 6:54 am
- Location: Around
-
- n00b
- Posts: 10
- Joined: Mon Jan 15, 2007 1:30 pm
- Location: My Mousehole
- Contact:
Possibly interesting info...
I just deleted my gw.dat as part of a defrag, and once I re-downloaded it I noticed that my settings for interface, graphics, sound, etc. were messed up and had to be re-adjusted, along with my "remember me" account information (account email), which was unchecked and missing. So it seems this information is also stored in the gw.dat file.
Hopefully this info helps the group, as (if I'm correct) the upper section of the gw.dat file should be the section that contains this information, meaning there is a way to use this to see the differences between two almost identical gw.dat files.
Let me know if anyone else notices this effect too.
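Comparing two nearly identical gw.dat files, as suggested above, can be done with a simple byte-level diff. A minimal sketch (the function name and span-based output are my own, not from any GW tool):

```python
def diff_regions(a: bytes, b: bytes):
    """Yield (offset, length) spans where two buffers differ."""
    start = None
    n = min(len(a), len(b))
    for i in range(n):
        if a[i] != b[i]:
            if start is None:
                start = i          # a differing run begins here
        elif start is not None:
            yield (start, i - start)
            start = None
    if start is not None:          # run extends to the end
        yield (start, n - start)
```

Read both files with `open(path, "rb").read()` and feed in just the upper section, if that is indeed where the settings live.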
Last edited by AnonAMouse on Tue Jan 23, 2007 11:25 pm, edited 1 time in total.
-
- advanced
- Posts: 45
- Joined: Sun Jul 16, 2006 6:54 am
- Location: Around
Just out of curiosity, did you delete any other files/folders? While storing client data on the player's physical computer is far from unheard of in an MMO, I've never seen an instance where it's located in an archive.
I have to hand it to these guys; the programmers over at NCSoft are clever b*stards. But on the other hand, it's been stated that most of the architecture is designed to baffle any potential hacker into submission.
I can only imagine how fun this is going to be once quantum mechanics becomes the security standard.
-
- n00b
- Posts: 10
- Joined: Mon Jan 15, 2007 1:30 pm
- Location: My Mousehole
- Contact:
Disappearing User Data...
OneOneSeven wrote: Just out of curiosity, did you delete any other files/folders? While storing client data on the player's physical computer is far from unheard of in an MMO, I've never seen an instance where it's located in an archive.
Nope, but as I mentioned before, this is "dumb" data and game information, such as your account email and screen resolution/PC settings for the game.
OneOneSeven wrote: I have to hand it to these guys; the programmers over at NCSoft are clever b*stards. But on the other hand, it's been stated that most of the architecture is designed to baffle any potential hacker into submission.
From what I have been able to fathom out from various research, the hardcore info is server-side; we just get the gfx, sound, etc., and our PCs get told when to use this stuff.
Look at the fact there isn't any local output for the "Logitech® G15" keyboard's LCD screen. Oh, they have it for "Auto Assault", but not for Guild Wars; see here: Play NC Support: Logitech® G15
OneOneSeven wrote: I can only imagine how fun this is going to be once quantum mechanics becomes the security standard.
Yup, and "Windows® Vista™" isn't supported by Guild Wars either; see here: Play NC Support: Windows® Vista™
-
- advanced
- Posts: 45
- Joined: Sun Jul 16, 2006 6:54 am
- Location: Around
Re: Disappearing User Data...
AnonAMouse wrote: From what I have been able to fathom out from various research, the hardcore info is server-side; we just get the gfx, sound, etc., and our PCs get told when to use this stuff.
Generally that's the way it works with most online/multiplayer games. Transmitting anything else is a huge bandwidth hog.
Also, I believe most MMO hacks revolve around reverse-engineering certain resources used in tracking the player and their actions. A hacker can then take advantage of this to spit out packets with modified information, duping the server into thinking the player possesses certain items, quantities, or even abilities and stats. The additional layers of protection help ensure that no hacker can figure out a means of fooling the server. In certain cases, in-game assets can be modified to the player's advantage, such as changing the player skins in Counter-Strike to bright orange or blue, thereby giving ample warning of other players' positions. In theory, some of the sounds (e.g. silenced weapons) could also be muddled with to completely circumvent their benefit.
On the other hand, things like this are sometimes used for good rather than evil. I'm thinking primarily of the old (really good) SWG in this instance, but similar concepts can be found throughout the community. Anyhow, there's been an ongoing project to reverse-engineer the communications for the pre-EQing client and server taken from old packet logs. Considering the minute amount of information available to them, the dev team has done an admirable job, succeeding not just in creating a connection the program will accept, but even in adding most functionality back to the game, such as vendors. This new server program has been released within the community, and certain benevolent individuals operating industrial-strength server machines are hosting their own SWG servers.
-
- n00b
- Posts: 10
- Joined: Mon Jan 15, 2007 1:30 pm
- Location: My Mousehole
- Contact:
Hmmm, just spotted a 0-byte gw.tmp file, and a search for gw.tmp found GW.TMP-********.pf, a 45.5k prefetch file containing references to gw.tmp too.
Don't know if this is of any use, but this file also contains data:
\DEVICE\HARDDISKVOLUME2\PROGRAM FILES\GUILD WARS\GW.TMP
\DEVICE\HARDDISKVOLUME2\WINDOWS\SYSTEM32\SORTKEY.NLS
\DEVICE\HARDDISKVOLUME2\PROGRAM FILES\GUILD WARS\GW.EXE
This is not the whole GW.TMP-********.pf file, just the quoted info above.
Note: ******** is an 8-character hex file reference.
Edit: GW.EXE-********.pf found, and it contains references to my DirectSong directory.
I wonder if they are using prefetch to also store directories, filenames, and settings?
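Those spaced-out characters are UTF-16LE strings, which is how Windows stores paths inside prefetch files. A generic way to scan them out of the raw bytes (this is a plain string scan, not a parser for the .pf format):

```python
import re

def utf16_paths(data: bytes):
    """Return backslash-prefixed UTF-16LE strings found in raw bytes."""
    # A path: '\' then 3+ printable ASCII chars, each followed by a NUL.
    pattern = re.compile(rb"(?:\\\x00)(?:[ -~]\x00){3,}")
    return [m.group().decode("utf-16-le") for m in pattern.finditer(data)]
```

Running this over a .pf file pulls out the same \DEVICE\... paths quoted above.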
-
- advanced
- Posts: 45
- Joined: Sun Jul 16, 2006 6:54 am
- Location: Around
-
- advanced
- Posts: 60
- Joined: Fri Feb 02, 2007 9:58 pm
- Been thanked: 2 times
GW
OK. I've made some progress on this.
Theory 2 (from the article page) and flag theory 2 are the correct ones. Theory 1 can be thrown out.
I figured out how the hash table works.
Here's what happens.
When the program reads the "big file" section mentioned in the discussion, it first makes sure that it has the proper number of files, by reading through the whole list, skipping zero entries (which can happen if your gw.dat has been around for a while).
Provided this is OK, it then does a binary sort of the "big file" section, sorting according to the offset of the file. It does this so it can use the hash table. So they are in order by block offset.
Then it does something I'm not completely sure about. It reads through each record of the big file again, taking the size of the file and comparing it to the location of the next record. If they mesh, it continues on. But sometimes the files appear to overlap, in which case the program makes a linked list containing these "big" records. This happens even in a fresh gw.dat. So some of the records are not contiguous? Maybe this has something to do with the ffna files. I'll have to look.
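The sort-and-mesh pass described above might be sketched like this (the (offset, size) record layout is my assumption, not the actual gw.exe structure):

```python
def find_noncontiguous(records):
    """records: (offset, size) pairs. Return the ones that don't mesh
    with the start of the next record after sorting by offset."""
    ordered = sorted(r for r in records if r != (0, 0))  # skip zero entries
    stray = []
    for (off, size), (next_off, _) in zip(ordered, ordered[1:]):
        if off + size != next_off:      # sizes don't mesh
            stray.append((off, size))
    return stray
```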
After this is finished, it moves the sorted list in memory and then loads in the "hash" table.
It makes sure that there are the right number of records (which seems to be the total number of records shifted right by 3, i.e. divided by 8), that none of them are blank, and that the records are sequential. Not every sequence number is represented; for example, it might go 10, 10, 1F because of the size of some of the files, I think.
Anyway, it then makes a linked list of this too, appending a counter to it in memory. Then it gets sorted as well.
Here's what that hash file actually does. Each record is two double-words. The left-most double word is a FILE NUMBER, NOT A COUNTER. The right-most is an INDEX where the file resides in the gw.dat.
Some files have TWO entries, which is because the pointer of the file won't fit in 32-bits all the time. At least so I think so far. This might also be why some of the files that seem to be larger than allowed are thrown into the linked-list.
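Reading the hash table as described, with each record being two double-words (file number first, then index), might look like the sketch below. The little-endian byte order is an assumption:

```python
import struct

def parse_hash_table(raw: bytes):
    """Map file number -> index into the MFT (8-byte records)."""
    table = {}
    for file_no, index in struct.iter_unpack("<II", raw):
        table[file_no] = index      # a second entry overwrites the first
    return table
```

Note that files with TWO entries (as mentioned above) would need special handling; this sketch just keeps the last one seen.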
So anyway, let's say for example the program wants to read the preferences file. It happens to be file number 3. It looks file number 3 up in the hash table, and gets the index of 1cbba.
Then, it performs the following calculation on it:
It would take, for example, 1cbba and multiply it by two, and then add 1cbba to it.
Then, that value is multiplied by 8 and then added to the very beginning of the MFT. NOT the section where the big file resides, but the true beginning of the mft (where the word 'mft' actually is).
Pseudocode would be:
FileIndex = 1cbba; (note: this number changes when the .dat gets updated)
FileIndex = FileIndex + (1cbba * 2);
FileIndex = BeginningOfMFT + (FileIndex * 8)
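Since tripling the index and then multiplying by 8 amounts to a 24-byte record stride, the pseudocode above boils down to:

```python
def mft_record_offset(mft_base: int, file_index: int) -> int:
    """Offset of a file's MFT record: index * 3 * 8, i.e. 24 bytes/record."""
    return mft_base + (file_index * 3) * 8
```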
BTW. File #10 is the executable itself, which is in the gw.dat probably to compare checksums so people like me don't fiddle with anything.
Now, I'm not 100% sure yet HOW it determines what files to look up, but at least SOME of them are hard-coded. The preferences file, which is 3, is hard coded.
Next up is busting the decompression so we don't have to use the stupid GW itself to decompress the files.
It looks to be a variant of Huffman encoding, and doesn't look all that complicated. It creates a big Huffman tree, pushes it on the stack, and then uses it as a look-up to decode. Shouldn't take me too long.
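For anyone unfamiliar with the technique: Huffman decoding is a bit-by-bit tree walk. The sketch below is a generic illustration with an invented code table, NOT the actual tree layout gw.exe builds:

```python
def build_tree(codes):
    """codes: {symbol: bit-string}. Build a nested-dict decode tree."""
    root = {}
    for sym, bits in codes.items():
        node = root
        for b in bits[:-1]:
            node = node.setdefault(b, {})
        node[bits[-1]] = sym        # leaf holds the symbol
    return root

def decode(bits, tree):
    """Walk the tree bit by bit, emitting a symbol at each leaf."""
    out, node = [], tree
    for b in bits:
        node = node[b]
        if not isinstance(node, dict):
            out.append(node)
            node = tree             # restart at the root
    return out
```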
Then, it is on to decoding the file formats.
Is anyone else working on this?
BlackDragon
Last edited by BlackDragon on Wed Feb 21, 2007 9:12 pm, edited 1 time in total.
- Dinoguy1000
- Site Admin
- Posts: 786
- Joined: Mon Sep 13, 2004 1:55 am
- Has thanked: 154 times
- Been thanked: 163 times
In the meantime, feel free to update the WIKI specifications with your findings... If Theory 1 is, indeed, incorrect, it should be deleted.
-
- advanced
- Posts: 45
- Joined: Sun Jul 16, 2006 6:54 am
- Location: Around
I'd be willing to bet the contiguity check is to verify that files haven't been tampered with; i.e. you can't modify a file by replacing all the data in the original chunk with whatever you intended: if the file's too small, the ends won't line up; if it's too large, you can't fill up the space and include the rest offset over at the end. The overlap might have something to do with compression?
-
- advanced
- Posts: 60
- Joined: Fri Feb 02, 2007 9:58 pm
- Been thanked: 2 times
OneOneSeven wrote: I'd be willing to bet the contiguity check is to verify that files haven't been tampered with... The overlap might have something to do with compression?
I'm not sure yet. Right now I don't think it is all that important, as I want to write code for decompression and then see how it determines what files to load in, and whether they are all hard-coded. I wouldn't think so... I'll bet that the ffna files have links to other files in them.
Don't hold me to that though, it's just a thought.
I've had a lot of downtime in the last few days, I'm hoping to pick this back up soon.
BD
-
- advanced
- Posts: 45
- Joined: Sun Jul 16, 2006 6:54 am
- Location: Around
-
- advanced
- Posts: 60
- Joined: Fri Feb 02, 2007 9:58 pm
- Been thanked: 2 times
OK. I finally got back to work on this.
As a side note, DON'T upgrade to Vista yet, as I have had an inordinate number of problems with it.
Anyway, I was playing around with this linked-list record issue mentioned above, and just decided to strip the whole section out of the code just to see what happens.
It loaded everything fine, but there were lots of GFX issues when doing so, and it corrupted my gw.dat.
Oh well.
Anyway, a little more information about this.
After the file pointers are sorted, it goes back through them record by record. In the last gw.dat I had, the first record was 20 (which seemed to be empty), the second was 200, then 400, then a big jump to 15E000-something. Keep in mind these numbers are pointers to physical locations in the gw.dat. It uses the hash table to find a specific file.
So, what it does is load in the value for the size of a file, run a small algorithm on it (which verifies it is on a 512-byte boundary), then add the size of the file to the pointer to the physical location and store it.
Then, it compares that stored number to what was calculated PREVIOUSLY to make sure the sizes match.
Sometimes, however, the file is SMALLER than it should be, meaning there is a gap between the calculated physical offsets of this record and the previous one. This is when the file size and location are pushed into the linked list.
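The boundary check and gap detection described above, sketched under the same assumption as before (records as (offset, size) pairs, already sorted by offset):

```python
def align512(size: int) -> int:
    """Round a size up to the next 512-byte boundary."""
    return (size + 511) & ~511

def find_gaps(records):
    """Yield records whose aligned end falls short of the next record."""
    for (off, size), (next_off, _) in zip(records, records[1:]):
        if off + align512(size) < next_off:     # gap -> linked list
            yield (off, size)
```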
I've tried to look for any kind of file features that stand out, like maybe all of these files in the list are ffna files or something, but haven't seemed to notice a pattern yet.
I'll keep plugging along.
Suggestions and thoughts are welcome.
I've also finally started on the decompression. It doesn't look complicated, just as mentioned previously, but it sure is long.
BD