The program was reading only 100 8kB blocks (to take advantage of the STORAGE CACHE) in RANDOM ORDER. I ran a C program written by my colleague Vlado on our AIX P6 4200MHz, on a 64kB striped FS with 5 LUNs attached (2Gb fibres) to an IBM DS 8300. Maybe you should find your old C program for testing the IO of that SSD disk :)

I'd like to note one extra point: I consider your single block reads from the SSD too slow as well! How is the SSD drive attached to the system? Is it some kind of internal disk (plugged directly into the server) or is it attached to a disk array? Is it really a performance issue, or just some bug in the Oracle Wait Interface?

Answers to the following question on a postcard, please: why do we get a "double hump" on the distribution of multiblock reads?

Any updates to this issue?
It is extremely interesting, at least to me.

(Most of the read by other session waits are waiting on the flash cache read as well – so I'll be aiming at two birds with one stone.)

db flash cache single block physical read    3,675,650    5,398    1    35.2    User I/O

Given the number of reads from flash cache in the hour, the tiny number of write waits isn't something I'm going to worry about just yet – my plan is to get rid of a couple of million flash reads first. The first set of figures is from the Top N section of the AWR, the second set is from the event histogram sections (the 11.2 versions are more informative than the 11.1 and 10.2 – even though the arithmetic seems a little odd at the edges). The hardware is pretty good at single block reads – but there's a strange pattern to the multiblock read times. It's interesting to see the figures for single and multiblock reads from flash cache. Here's an extract from a standard 11.2.0.2 AWR report:

So what's the next move when you've got 96GB of flash cache plugged into your server (check the parameters below) and see time lost on event write complete waits: flash cache?

db_flash_cache_file    /flash/oracle/flash.dat
Have you ever heard the suggestion that if you see time lost on the event write complete waits you need to get some faster discs?