During the build of our new servers, which run Solaris 10 with ZFS boot disks, we came across an issue on some of our DB and App servers.
My understanding of the ARC cache is that ZFS, by default, uses all available memory. It will give memory back to other services when asked, but only grudgingly, and that manifests itself as a slight delay. This delay can cause problems for some DBs/Apps. To get around it, you can change the ARC cache settings to restrict the amount of memory ZFS is allowed to use (the Hard Limit Size).
The file to edit is /etc/system. Add the following (I seem to remember there is no prior entry, so this will be the first ZFS ARC cache entry):
* ZFS ARC cache entry (comment)
set zfs:zfs_arc_max=<hard limit size>
Save the file and exit. The system will then need rebooting for the changes to take effect.
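Once the box is back up, it's worth confirming the kernel actually picked the value up. A quick sketch using mdb (the /E format prints the variable as an unsigned 64-bit decimal):

# print the live value of zfs_arc_max from the running kernel
echo "zfs_arc_max/E" | mdb -k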
NOTE: the Hard Limit Size should be set to the memory remaining AFTER you've taken your applications' requirements into consideration.
Example: the system has 16 GB RAM and the application(s) require 12 GB, which leaves a balance of 4 GB RAM; this will be the Hard Limit Size.
The Hard Limit Size needs to be written in bytes. Using the example above, that makes it 1024 x 1024 x 1024 x 4 = 4294967296.
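Putting the example together, the /etc/system entry for that 16 GB box would read:

* ZFS ARC cache entry
set zfs:zfs_arc_max=4294967296

If you don't fancy doing the multiplication by hand, echo '4 * 1024 * 1024 * 1024' | bc on the command line spits out the same figure.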
Checking ZFS ARC cache settings
There are a couple of tools out there that let you check the cache settings. They are freely available somewhere on the Internet. I say somewhere, because if I put a link here it's bound to break at some point!
Search for the following: arc_summary.pl and arcstat.pl.
I've tended to use arc_summary.pl more than the other, as it gives me the information I need. Here's a sample of its output:
System Memory:
        Physical RAM:  15579 MB
        Free Memory:   417 MB
        LotsFree:      241 MB

ZFS Tunables (/etc/system):

ARC Size:
        Current Size:             4517 MB (arcsize)
        Target Size (Adaptive):   4518 MB (c)
        Min Size (Hard Limit):    1819 MB (zfs_arc_min)
        Max Size (Hard Limit):    14555 MB (zfs_arc_max)

ARC Size Breakdown:
        Most Recently Used Cache Size:    100%  4518 MB (p)
        Most Frequently Used Cache Size:    0%  0 MB (c-p)

ARC Efficency:
        Cache Access Total:  2289867603
        Cache Hit Ratio:   97%  2225242330   [Defined State for buffer]
        Cache Miss Ratio:   2%  64625273     [Undefined State for Buffer]
        REAL Hit Ratio:    96%  2211505671   [MRU/MFU Hits Only]

        Data Demand Efficiency:    97%
        Data Prefetch Efficiency:  60%

        CACHE HITS BY CACHE LIST:
          Anon:                        --%  Counter Rolled.
          Most Recently Used:          10%  222683342 (mru)       [ Return Customer ]
          Most Frequently Used:        89%  1988822329 (mfu)      [ Frequent Customer ]
          Most Recently Used Ghost:     0%  692367 (mru_ghost)    [ Return Customer Evicted, Now Back ]
          Most Frequently Used Ghost:   1%  26184577 (mfu_ghost)  [ Frequent Customer Evicted, Now Back ]

        CACHE HITS BY DATA TYPE:
          Demand Data:        86%  1925807410
          Prefetch Data:       0%  9177425
          Demand Metadata:    12%  282516967
          Prefetch Metadata:   0%  7740528

        CACHE MISSES BY DATA TYPE:
          Demand Data:        89%  57756619
          Prefetch Data:       9%  5918790
          Demand Metadata:     1%  842028
          Prefetch Metadata:   0%  107836
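As an aside, if you can't track the scripts down, the raw counters they summarise are exposed through kstat anyway. A rough equivalent check, assuming the standard arcstats statistic names, would be:

# current ARC size and its min/max limits, in bytes
kstat -p zfs:0:arcstats:size
kstat -p zfs:0:arcstats:c_max
kstat -p zfs:0:arcstats:c_min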
arcstat.pl prints a new line every second (or two):
    Time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c
15:33:13    2G   64M      2   58M    2    6M   26  949K    0     4G    4G
15:33:14   132     0      0     0    0     0    0     0    0     4G    4G
15:33:15   100     0      0     0    0     0    0     0    0     4G    4G
15:33:16    74     0      0     0    0     0    0     0    0     4G    4G
15:33:17    64     0      0     0    0     0    0     0    0     4G    4G
15:33:18    36     0      0     0    0     0    0     0    0     4G    4G
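For reference, both scripts are plain Perl, so (assuming perl is on your PATH, and my memory of the interval argument is right) running them is as simple as:

perl arc_summary.pl    # one-off summary, as shown above
perl arcstat.pl 1      # rolling output, one line per second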