
Linked Lists


I love linked lists;)

Postby andys » Feb 1, 2002 @ 2:05am



Postby Digby » Feb 1, 2002 @ 6:03am

If all of the items in the list are the same size, and you know you aren't trying to delete the last item in the list (or you have a circular list), then removing an item from a singly-linked list can be performed by a simple memcpy.

You copy the entire contents of the next item in the list to the current item, then you delete the next item (or move it to the "free" list).

Here's a singly-linked list with 4 items:

A->B->C->D

If you want to delete item B, you copy C (incl. C's pNext pointer) over B, then delete C. Since C's pNext pointer pointed to D, B's pNext pointer now points to D after the copy (B has effectively become C). Pretty slick, eh?
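
In C, the trick looks something like this (a minimal sketch, assuming a hypothetical fixed-size Node type and that the item being removed is never the tail):

#include <stdlib.h>
#include <string.h>

typedef struct Node {
    int          value;
    struct Node *pNext;
} Node;

/* Remove 'victim' from a singly-linked list without knowing its
 * predecessor. Only valid when 'victim' is not the last node. */
void delete_node(Node *victim)
{
    Node *next = victim->pNext;          /* the node we'll actually free */
    memcpy(victim, next, sizeof(Node));  /* copy next's contents, incl. its pNext, over victim */
    free(next);                          /* or move it to the "free" list */
}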


More linked lists :-)

Postby refractor » Feb 1, 2002 @ 11:46am

Hello.

Just thought I'd clear some things up (mainly regarding the cache hit issue I mentioned).

Firstly, for a majority of cases, the speed loss of a regular linked-list *really* isn't going to matter. If you're writing a text editor or whatever, then a few cache misses aren't going to matter a great deal... if, however, you're writing a game or something that *needs* speed, then it's more of a problem.

Regarding Pam's question, Digby's reply is correct. A list of pointers into the data doesn't really help all that much - you're just increasing the chances that the pointer to the next object is in the cache... but the data you *need* from the object may not be - there's no point looking up a pointer if you always have to read part of the object to figure out whether it's the one you want.

To organise a linked list well for a given processor, you want to fit all of the data used in your traversal comparison within one cache line of that processor, and align your nodes to a cache-line boundary.

The StrongARM, for example, has a 32-byte cache line.

If I request a single byte of data, the whole 32-byte line will be fetched and loaded into the cache (if it's in a cacheable area - let's pretend it is :lol: ).

If my object is, say, 64 bytes long, structured like this:

long l_my_id
char c_filler[56]
pointer p_next

And my (nasty pseudo) code goes:

obj = head_object

// we want to find object 63
while obj.l_my_id <> 63 and obj.p_next <> null
    obj = obj.p_next
loop

First the processor will load the first 32 bytes of the object (to read l_my_id)... then it'll have to load the next 32 bytes to read p_next.

If I reorganise my data *slightly*:
long l_my_id
pointer p_next
char c_filler[56]

... then I'm only ever going to load *one* cache line per object for the traversal. This means I load half as much data, which increases the chances of things hitting the cache.
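
As a concrete (hypothetical) C version of that reorganised layout, assuming 4-byte long and pointer sizes as on the StrongARM:

#include <stddef.h>

/* Hot fields first: everything the traversal touches sits in the
 * first 32-byte cache line. */
struct obj {
    long        l_my_id;       /* traversal key */
    struct obj *p_next;        /* traversal link */
    char        c_filler[56];  /* cold data, only read after a match */
};

/* Walk the list looking for a given id - with the layout above this
 * touches one cache line per node. */
struct obj *find(struct obj *head, long id)
{
    struct obj *o = head;
    while (o != NULL && o->l_my_id != id)
        o = o->p_next;
    return o;
}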

You could also store the "traversal" data (the pointer and the items used in the comparison) in one area along with a pointer to the data you rarely use.
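
A hypothetical sketch of that split - the list nodes hold only the traversal data, with a pointer out to the rarely-used rest:

/* Cold data, rarely touched. */
struct payload {
    char c_filler[56];
};

/* Hot data: key, link and a pointer to the payload, all in one cache line. */
struct link {
    long            l_my_id;   /* traversal key */
    struct link    *p_next;    /* traversal link */
    struct payload *p_data;    /* only dereferenced after a match */
};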

Sometimes optimisation is more about clever data placement than about making the code itself faster :-)

MirekCz is correct too: if you're allocating each small structure with a separate malloc, you're going to suffer from memory fragmentation and nasty speed issues. If you allocate a block of memory (say, 64 nodes at a time), you can use it as a set of empty links. When you want to add an object, you simply pick the next unused object from the block; when they're all used, allocate another block. That way you only hit malloc() once every 64 objects.
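
A rough sketch of that scheme in C (the chunk size of 64 and the Node type are just placeholders):

#include <stdlib.h>

#define POOL_CHUNK 64   /* nodes grabbed per malloc() call */

typedef struct Node {
    int          value;
    struct Node *pNext;
} Node;

static Node *free_list = NULL;

/* Hand out a node from the free list, topping it up a chunk at a time. */
static Node *node_alloc(void)
{
    if (!free_list) {
        Node *chunk = malloc(POOL_CHUNK * sizeof(Node));
        if (!chunk)
            return NULL;
        for (int i = 0; i < POOL_CHUNK; ++i) {   /* thread the chunk onto the free list */
            chunk[i].pNext = free_list;
            free_list = &chunk[i];
        }
    }
    Node *n = free_list;
    free_list = free_list->pNext;
    return n;
}

/* Put a node back on the free list instead of calling free(). */
static void node_free(Node *n)
{
    n->pNext = free_list;
    free_list = n;
}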

Cheers,

Refractor.


Postby Pam » Feb 1, 2002 @ 1:56pm

All the easy problems have been solved.


Postby andys » Feb 1, 2002 @ 5:47pm



Postby RwGast » Feb 9, 2002 @ 8:12pm



Postby Digby » Feb 9, 2002 @ 8:29pm



Postby andys » Feb 9, 2002 @ 9:30pm



Postby RwGast » Feb 10, 2002 @ 5:33am



Postby RwGast » Feb 10, 2002 @ 5:43am



Postby RwGast » Feb 10, 2002 @ 5:49am



Postby Digby » Feb 10, 2002 @ 5:57am

It's crashing because you've told printf that you're supplying a null-terminated string, when in fact you're supplying a single character.
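
We can't see the original snippet here, but a minimal illustration of the mismatch Digby describes (the variable name is made up):

#include <stdio.h>

int main(void)
{
    char flag = 'A';

    /* printf("%s", flag) would be wrong: %s expects a pointer to a
     * NUL-terminated string, so passing a single char makes printf
     * chase a garbage address and typically crash. */
    printf("%c\n", flag);   /* print the single character */
    printf("%s\n", "A");    /* or pass an actual string */
    return 0;
}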


Postby RwGast » Feb 10, 2002 @ 6:08am

Heh, that was a really stupid mistake, sorry. But what about the advantages of using bit comparisons over single-variable checks? Is it just to make it easier to pass multiple options to your function this way? I mean, is ease of use the only advantage, and not speed?


Postby MirekCz » Feb 10, 2002 @ 11:50am

RwGast: about keeping flags in an int... first of all, an int is 32 bits, so you can keep 32 flags per int.
And here are the advantages/disadvantages as I see them (there's a small sketch after the list):
1. Checking whether a flag is set is pretty simple and doesn't really take much more time.
2. You need much less memory (good for larger arrays).
3. Reading is actually faster if you read several flags from one int and/or your CPU keeps them in cache (which is much harder without packed flags, because the cache gets trashed with mostly empty values and it's also harder to find the next value you want to check in the cache).
4. It saves a lot of space in files if you use it.
5. You can store mixed data in such an int, for example four 4-bit numbers plus 16 flags, which is great for making maps etc. that take really little memory.
6. And yeah, M$ uses it ;P
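
A small, hypothetical C sketch of what that looks like in practice (the OPT_* names and the packing layout are made up for illustration):

#include <stdio.h>

/* One bit per flag - 32 of these fit in an unsigned int. */
#define OPT_VISIBLE   (1u << 0)
#define OPT_SOLID     (1u << 1)
#define OPT_ANIMATED  (1u << 2)

int main(void)
{
    unsigned int flags = 0;

    flags |= OPT_VISIBLE | OPT_ANIMATED;   /* set two flags */
    flags &= ~OPT_ANIMATED;                /* clear one */

    if (flags & OPT_VISIBLE)               /* test one */
        printf("visible\n");

    /* Mixed packing (point 5): four 4-bit numbers in the low 16 bits,
     * room for 16 one-bit flags in the high 16 bits. */
    unsigned int packed = (7u << 0) | (3u << 4) | (12u << 8) | (1u << 12)
                        | (OPT_VISIBLE << 16);
    printf("second nibble = %u\n", (packed >> 4) & 0xFu);   /* prints 3 */
    return 0;
}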

hope it helps you find a good use for it:)
With best regards,
Mirek Czerwinski

