Although SQL queries are simple commands that are easy to learn and use, not all queries and MySQL database functions operate with the same efficiency. This becomes increasingly important as the amount of data you store grows and, if your database is backing a web application, as your site's traffic increases.
Here are some ways to optimize database queries so that your application interacts with the database more efficiently:
Enabling the slow query log in MySQL is straightforward. All you need to do is copy the queries below and run them in phpMyAdmin (or any MySQL client); this will enable slow query logging on your MySQL server.
SET GLOBAL slow_query_log_file = 'slow_query.log';
SET GLOBAL long_query_time = 10;
SET GLOBAL slow_query_log = 'ON';
After running the SQL statements above, run the query below to check whether the MySQL slow query log is enabled.
SHOW GLOBAL VARIABLES LIKE 'slow\_%';
It will list all MySQL variables that start with "slow", like this:
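The output will look roughly like the following (the exact set of variables and their values vary by MySQL version and configuration; a relative file path is resolved against the server's data directory):

```
+---------------------+----------------+
| Variable_name       | Value          |
+---------------------+----------------+
| slow_launch_time    | 2              |
| slow_query_log      | ON             |
| slow_query_log_file | slow_query.log |
+---------------------+----------------+
```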
We have successfully enabled the slow query log on the MySQL server. Now run the query below to verify the long_query_time value we set.
SHOW GLOBAL VARIABLES LIKE 'long_query_time';
Now access your application via a browser; any slow queries running on the pages you visit will be logged. Then go to the 'data' folder in the MySQL server directory, where you will find a file named 'slow_query.log' containing all the slow queries that were logged.
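A typical entry in the slow query log follows the format below (the timestamp, host, statistics, and query here are placeholders for illustration, not real logged output):

```
# Time: 2024-01-15T10:30:00.000000Z
# User@Host: app_user[app_user] @ localhost []
# Query_time: 12.345678  Lock_time: 0.000123 Rows_sent: 1  Rows_examined: 500000
SET timestamp=1705314600;
SELECT * FROM users WHERE email = 'user@example.com';
```

The Query_time and Rows_examined fields are the ones to watch: a query that examines hundreds of thousands of rows to return one is a prime candidate for an index.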
This is another simple way to find all the queries that are not using indexes while fetching records from the MySQL server. Run the query below to enable logging of queries that do not use indexes.
SET GLOBAL log_queries_not_using_indexes = 'ON';
Now run the query below to check the 'log_queries_not_using_indexes' setting.
SHOW GLOBAL VARIABLES LIKE 'log_queries_not_using_indexes';
These queries will be logged to the MySQL general query log file (general_log_file).
If your table columns are not properly indexed, queries will run very slowly, which in turn badly hurts application performance. For example, to fetch a single record from a table containing hundreds of thousands of records, MySQL must check all of those records to find the one you want. It's like searching for a single page in a 1000-page book by going through every page until you reach the right one.
If we index our table properly, MySQL fetches that single record far more efficiently by looking it up through the index. It's like finding a page by consulting the book's index and jumping straight to the exact page.
For example, suppose I have a users table with the following structure, where only the id column is indexed.
CREATE TABLE IF NOT EXISTS `users` (
  `id` int(11) NOT NULL,
  `email` varchar(75) NOT NULL,
  `password` varchar(75) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

--
-- Indexes for table `users`
--
ALTER TABLE `users` ADD PRIMARY KEY (`id`);
When handling a login request, we check both the email and password of the user against the user records to authenticate the user. To optimize our login query, we should index both email and password together, like this:
ALTER TABLE `users` ADD INDEX `email_pdx` (`email`, `password`);
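With that composite index in place, a login lookup such as the following (table and column names are from the schema above; the literal values are placeholders) can be resolved through the index instead of a full table scan:

```sql
SELECT `id`
FROM `users`
WHERE `email` = 'user@example.com'
  AND `password` = 'hashed_password_value';
```

Because the index covers (email, password) in that order, queries that filter on email alone can also use it, thanks to MySQL's leftmost-prefix rule.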
To check that our index is set properly, run the following query. It will list all the indexes on the users table.
SHOW INDEX FROM users;
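SHOW INDEX produces one row per column in each index; trimmed to a few of its columns, the output for the table above looks roughly like this (illustrative, not real server output):

```
+-------+------------+-----------+--------------+-------------+
| Table | Non_unique | Key_name  | Seq_in_index | Column_name |
+-------+------------+-----------+--------------+-------------+
| users |          0 | PRIMARY   |            1 | id          |
| users |          1 | email_pdx |            1 | email       |
| users |          1 | email_pdx |            2 | password    |
+-------+------------+-----------+--------------+-------------+
```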
While indexing, consider the following:
1. Don't index columns in tables that hold only a small number of records.
2. Index the table columns that are used in WHERE and JOIN conditions.
Use the following queries to add indexes to the table columns.
-- Adding an index to a table
CREATE INDEX index_name ON table_name (column_name);

-- Adding an index via ALTER TABLE
ALTER TABLE table_name ADD INDEX (column_name);
Overusing indexes also slows down SELECT, INSERT, and UPDATE statements badly, and takes up a lot of disk space. Use indexes carefully, and remove duplicate indexes from your tables. Look at the table below, which has duplicate indexes.
CREATE TABLE IF NOT EXISTS `users` (
  `id` int(11) NOT NULL,
  `email` varchar(75) NOT NULL,
  `password` varchar(75) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `email` (`email`),
  KEY `email_1` (`email`),
  KEY `email_password` (`email`,`password`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
The email and email_1 keys cover the same column, and both are redundant with the leftmost column of email_password, so remove the duplicates:
ALTER TABLE users DROP INDEX email;
ALTER TABLE users DROP INDEX email_1;
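One way to spot duplicate indexes across a whole schema is to list each index with its column composition and look for repeats. A query along these lines against information_schema works (run it while connected to the database you want to audit):

```sql
SELECT table_name,
       index_name,
       GROUP_CONCAT(column_name ORDER BY seq_in_index) AS index_columns
FROM information_schema.statistics
WHERE table_schema = DATABASE()
GROUP BY table_name, index_name
ORDER BY table_name, index_columns;
```

Two rows for the same table with identical index_columns values are duplicates; an index whose columns are a leftmost prefix of another index's columns is usually redundant as well.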
Like duplicate indexes, unused indexes also slow down SELECT, UPDATE, and INSERT statements and take up a lot of disk space. So manually verify which indexes are unused and drop them.
When analyzing MySQL query performance, we use the EXPLAIN statement to examine our queries.
The EXPLAIN output lists the possible indexes, the index actually chosen, and the number of rows examined to fetch the records we need. From this output we can decide which table columns should be indexed.
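For example, running EXPLAIN on the login query against the users table from earlier (the literal values are placeholders) shows whether the email_pdx index is picked up:

```sql
EXPLAIN SELECT `id`
FROM `users`
WHERE `email` = 'user@example.com'
  AND `password` = 'hashed_password_value';
```

If the index is being used, the key column of the EXPLAIN output shows email_pdx and the rows estimate drops to a handful; a NULL key with a large rows estimate indicates a full table scan and points to a missing index.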