I have a PHP script that scores tests for about 2200 students. When it scores an individual student's record, MySQL writes the results out to a table for each item. The array for each individual student has about 225 values (i.e., has info on each test item).
This process takes about 1 minute and 40 seconds. Not bad, but I notice that the hard drive is working furiously the whole time.
Is there a way to optimize the memory so that the HD doesn't have to write the data for every student (which is what it seems to be doing), but can "bulk" update after a delay and speed up this process?
I've read a lot of information about tweaking MySQL's use of memory, but it was too technical for me to understand. If someone could point me in the right direction, though, I'd be delighted to continue my research.
My computer runs Windows XP with 4 GB of memory and lots of free disk space. Using localhost only.
Meh, I disagree with that article. I was working on a ColdFusion project where we used stored procedures, and the speed increase was at least 20%. The guy has some good points on other things, such as time to implement, but he is wrong on the speed. I know that one for a fact; I ran many tests on this when I was writing them for the project. BTW, our project was on MSSQL. Who knows what he was using (the article didn't say), and that could make a huge difference.
And Dave, could you possibly store the info in a session, and then, once the array reaches a certain size, write it to the database and clear it from the session? I haven't ever tried that, but it seems like it could work.
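A rough sketch of that session-buffer idea, in case it helps. Everything here is invented for illustration (the `pending` key, the `FLUSH_AT` threshold, and the `$flush` callback, which would be where your actual INSERT happens):

```php
<?php
// Sketch: buffer scored rows in the session, and only write them out
// once enough have accumulated. All names here are placeholders.
session_start();

const FLUSH_AT = 50; // arbitrary threshold; tune to taste

function buffer_student_rows(array $rows, callable $flush): void
{
    // Append this student's rows to whatever is already pending.
    $_SESSION['pending'] = array_merge($_SESSION['pending'] ?? [], $rows);

    // Once the buffer is big enough, hand it off (e.g. to one bulk
    // INSERT) and empty it.
    if (count($_SESSION['pending']) >= FLUSH_AT) {
        $flush($_SESSION['pending']);
        $_SESSION['pending'] = [];
    }
}
```

The trade-off is the usual one with buffering: fewer, larger writes, but anything still sitting in the session is lost if the script dies before the final flush.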
I do believe this could be done by altering your array to have one more dimension such as:
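Something along these lines, say. The student IDs, the `score_student()` helper, and the item count are all stand-ins for whatever your script actually uses:

```php
<?php
// Two-dimensional results array: first key is the student, second key
// is the test item. score_student() here is a dummy standing in for
// your real scoring routine.
function score_student(int $studentId): array
{
    return array_fill(1, 225, 0); // placeholder: 225 item scores
}

$results = [];
foreach ([1001, 1002, 1003] as $studentId) {
    $results[$studentId] = score_student($studentId);
}
// $results now holds every student's scores in memory,
// ready to be written out in one go.
```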
Then you could build an insert query from that array. MySQL allows you to insert multiple rows of data in one query, as shown here. If you run into a packet-size error, you can raise the max_allowed_packet setting in your MySQL config file, since the server is on your own machine.
"Why is it drug addicts and computer aficionados are both called users?" -Clifford Stoll