Shared pool memory


Addendum: April 12 - a resolution to this quirk is now at the bottom of this page

I've given this a whirl on most versions from 8.1.5 on NT and Solaris, and got similar results on all. I've seen some literature in the past saying that at startup, the database reserves about half of the shared pool memory, to be released as memory comes under pressure.

To see if this was true, I played about with various shared pool sizes, and they were all consistent with this hypothesis. For example:

Immediately after startup (actually a "startup nomount" on a non-existent database), with the shared pool set to 40000000 (40 meg) in init.ora, the following is observed:

select ksmchcls, sum(p.ksmchsiz) tot
from sys.x$ksmsp p
group by ksmchcls;

KSMCHCLS             TOT
----------- ------------
R-free           2000000
R-freea               40
free            16541304
freeabl          2514628
perm            21760488
recr              701904

which is what I would expect - about half of the memory held back in the 'perm' area. This remains the case up to about 67 meg. But once I exceed this figure, a startling change occurs. When I set shared_pool_size to 70000000 (70 meg) in init.ora and "startup nomount" again, you see

select ksmchcls, sum(p.ksmchsiz) tot
from sys.x$ksmsp p
group by ksmchcls;

KSMCHCLS             TOT
----------- ------------
R-free           3500000
R-freea               40
free            60401700
freeabl          2524560
perm             6409504
recr              682552

So all of a sudden, the bulk of the memory is marked 'free'. The crossover point is somewhere between 67360000 and 67370000. I've yet to find out why, or to get any explanation from Oracle - so if anyone has any ideas, please drop me a line.
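For anyone wanting to poke at the crossover themselves, a slight variant of the query above (just a sketch) also shows how many chunks each class holds and the largest single chunk, which makes the flip much more obvious:

-- chunk counts and largest single chunk per class, as well as the totals
select ksmchcls,
       count(*)        chunks,
       max(p.ksmchsiz) largest,
       sum(p.ksmchsiz) tot
from sys.x$ksmsp p
group by ksmchcls;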


Addendum: Some information I got from Oracle

On a 32-bit system, the maximum size of a single chunk of memory in the SGA under KGH is 67108852 bytes. If the SGA is larger than this, multiple 'free' chunks are created, each of this size, plus one for the remainder. So how does this affect our example?

For a 20M shared pool, at startup it is initialised as a single 20M 'free' chunk. The first permanent request then converts this to a 'perm' chunk. Of course, the first non-permanent request needs some 'free' memory, so half of the 'perm' chunk is released back as 'free' (after which the requested size is then grabbed). Allocations then come out of the 'free' area until it's all gone.
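The 40M figures at the top of the page bear this halving out. A rough check using the numbers from the first query (including the reserved pool):

-- 'perm' as a percentage of the whole pool:
-- 21760488 / (2000000 + 40 + 16541304 + 2514628 + 21760488 + 701904)
select round(100 * 21760488 / 43518364, 1) pct_perm from dual;

which gives 50.0 - i.e. about half the pool sitting in 'perm'.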

For a 70M shared pool, at startup it is initialised as a 3M 'free' chunk plus a second 'free' chunk of 67M (the largest allowed). The first permanent request then converts the 3M to a 'perm' chunk, leaving the 67M chunk as 'free'. The first non-permanent request therefore already has a 'free' area to grab memory from, and we see the results as shown in X$KSMSP. The distribution hence jumps about as the shared pool size is increased, as the graph below shows.
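As a quick sanity check of the 3M + 67M split described above (just the arithmetic, nothing Oracle-specific):

-- how a 70M request divides into maximal 67108852-byte chunks
select trunc(70000000 / 67108852) full_chunks,
       mod(70000000, 67108852)    remainder_bytes
from dual;

which gives 1 full chunk and a remainder of 2891148 bytes - the "3M" chunk.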

As the shared pool size is increased further, you can see this manifest itself as several large 'free' chunks and a single 'perm' chunk, none of which is larger than 67M.
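A quick way to confirm this (a sketch - the 10M cutoff is arbitrary, just to filter out the small chunks) is to list the individual large chunks and check that none exceeds 67108852 bytes:

select ksmchcls, ksmchsiz
from sys.x$ksmsp
where ksmchsiz > 10000000   -- arbitrary cutoff to show only the big chunks
order by ksmchsiz desc;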