Sunday, December 22, 2013

Oracle 10g:Backup archives on hourly basis



Backup archives on hourly basis



To be able to fully recover the database, we back up the archived logs every hour.

Use the script below for the backup; the inline comments explain each step.

#!/usr/bin/ksh

# Go to backup location.
cd /rmanbck/arch

ddy=`date +%a`
dt=`date +%Y%m%d`

# Sequence up to which archives were already backed up (taken from the newest backup log name)
lastredono=`ls -rt *arch*.log|tail -1|sed 's/^.*arch_//;s/[.]log//'|awk -F_ '{print $2}'`
# Start two sequences earlier to overlap with the previous backup
fno=`expr ${lastredono} - 2`
# From the archive destination, find the highest sequence generated so far
tno=`find /oracle/arch -name "*.arch" -print|sort|tail -1|sed 's@/oracle/arch/@@'|awk -F_ '{print $1}'`
FNAME=${dt}_${ddy}_arch_${fno}_${tno}
echo $FNAME
rman target / log /rmanbck/arch/$FNAME.log << EOF
run
{
ALLOCATE CHANNEL disk3 DEVICE TYPE DISK;
backup as compressed backupset FORMAT '/rmanbck/arch/%T_${ddy}_arch_${fno}_${tno}_%U.set' archivelog from sequence ${fno} until sequence ${tno};
release channel disk3;
}
EOF
exit;
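To actually run this every hour, schedule the script with cron on the database server. A minimal sketch, assuming the script is saved as /rmanbck/arch/arch_backup.sh (a hypothetical path) and runs as the oracle OS user:

# Run the archive log backup at the top of every hour (crontab -e as the oracle user)
0 * * * * /rmanbck/arch/arch_backup.sh > /rmanbck/arch/arch_backup_cron.log 2>&1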


Saturday, November 16, 2013

Oracle 10g:How to use SQL Tuning Advisor.

SQL Tuning Advisor



For background, you can read more about the SQL Tuning Advisor on the Oracle documentation site.



***Tuning SQL Queries by Oracle SQL Tuning Advisor***

A) SQL query tuning by SQL_TEXT.

1. Get the query details from the AWR report, for example from the top CPU-consuming queries or the SQL ordered by elapsed time sections.

SQL>select SQL_TEXT,sql_id from v$sqlarea where sql_id='&sql_id';

2. Get the bind variable details from V$SQL_BIND_CAPTURE or DBA_HIST_SQLBIND.

SQL>select address, hash_value, name, last_captured, DATATYPE_STRING, VALUE_STRING from V$SQL_BIND_CAPTURE where sql_id='&sql_id' order by last_captured;

SQL>select sql_id,name,last_captured,DATATYPE_STRING,VALUE_STRING from DBA_HIST_SQLBIND where sql_id='&sql_id'  and trunc(last_captured)=trunc(sysdate);

3. Prepare the query by substituting the captured bind values (VALUE_STRING) into the SQL text. The values must match the data types shown in DATATYPE_STRING.

4. Declare a new SQL tuning task.
Original text with bind variables:
SQL>SELECT ''Y'' FROM ***** WHERE ****= '':B3'' AND **** IN ('':B1'','':B2'');


--Declare the tuning task
DECLARE
 my_task_name VARCHAR2(30);
 my_sqltext   CLOB;
BEGIN
 my_sqltext := 'SELECT ''Y'' FROM **** WHERE ***= ''***'' AND *** IN (''B'',''C'')';

 my_task_name := DBMS_SQLTUNE.CREATE_TUNING_TASK(
         sql_text    => my_sqltext,
         user_name   => '&schema_name',
         scope       => 'COMPREHENSIVE',
         time_limit  => 2000,
         task_name   => 'my_sql_tuning_task4',
         description => 'Task to tune a query on a specified employee');
END;
/


5. Execute the task.
BEGIN
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK( task_name => 'my_sql_tuning_task4' );
END;
/
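Optionally, confirm that the task has finished before pulling the report by checking the advisor log dictionary view:

SQL>SELECT task_name, status FROM DBA_ADVISOR_LOG WHERE task_name = 'my_sql_tuning_task4';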

6. Once step 5 has completed, check the SQL Tuning Advisor recommendations.


SQL>SET LONG 1000
SQL>SET LONGCHUNKSIZE 1000
SQL>SET LINESIZE 100
SQL>SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK( 'my_sql_tuning_task4')
  FROM DUAL;

7. Drop the task if the recommendation does not give a significant benefit (for example, less than 80%).

BEGIN
  DBMS_SQLTUNE.DROP_TUNING_TASK('my_sql_tuning_task4');
END;
/
  
B) SQL query tuning by SQL_ID.

1. Save the script below to a .sql file. It prompts for two inputs:
--SQL_ID
--SQL tuning task name
Also change the spool location in the script to suit your environment.

undefine tsk_name_of_tune;
undefine sql_id;


set head on;
set feedback on;
set pages 9000;
set lines 150;
set serveroutput on size unlimited;


DECLARE
 my_task_name VARCHAR2(30);
 my_sqltext   CLOB;
BEGIN
my_task_name := DBMS_SQLTUNE.CREATE_TUNING_TASK(
  sql_id          => '&&sql_id',
  plan_hash_value => null,
  scope           => 'COMPREHENSIVE',
  time_limit      => 1000,
  task_name       =>'&&tsk_name_of_tune',
  description     => 'Task to tune a query on a specified employee');
end;
/
clear scr ;

PROMPT ================================
PROMPT Executing the Tuning Task 
PROMPT ================================

BEGIN
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK( task_name => '&&tsk_name_of_tune' );
END;
/


PROMPT ================================
PROMPT Fetching The Plan 
PROMPT ================================

spool /tmp/Month_end_Sep13/&&tsk_name_of_tune
SET LONG 9999999
SET LONGCHUNKSIZE 1000
SET LINESIZE 200
set pages 0

SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK( '&&tsk_name_of_tune') FROM DUAL;

spool off;

clear scr ; 

PROMPT ==================================================================================
PROMPT New plan details are available in : /tmp/Month_end_Sep13/&&tsk_name_of_tune..lst
PROMPT ==================================================================================


PROMPT Execute the following command to accept the recommended SQL profile
set long 5000;
col New_plan for a32765 word_wrap;

select  'exec dbms_sqltune.accept_sql_profile(task_name =>'||''''|| '&&tsk_name_of_tune'||''''||',name => '||''''||'&&tsk_name_of_tune'||''''||', replace => TRUE);'  from dual;
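After running the generated command, you can verify that the profile exists and is enabled through the DBA_SQL_PROFILES dictionary view:

SQL>SELECT name, status, created FROM DBA_SQL_PROFILES WHERE name = '&&tsk_name_of_tune';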


Tuesday, October 15, 2013

Oracle 8i:Disaster recovery solution -Standby Database for Oracle version 8.1.7.0.0

Oracle 8i:Standby Database for Disaster recovery solution.






Here are the basic steps for creating a standby database in Oracle 8i.

This setup is for testing purposes, so we go through all the steps, from creating the primary database to keeping the standby database in sync.

1.Create/Configure Primary Database.

a)Create primary database pfile.

db_name = "primary"
instance_name = primary
service_names = primary
control_files = ("/oracle/primary/stdby1.ctl","/oracle/primary/stdby2.ctl")
control_file_record_keep_time=30
open_cursors = 100
max_enabled_roles = 30
db_block_buffers = 40000
db_block_lru_latches=4
cursor_space_for_time=true
session_cached_cursors=40
shared_pool_size = 71680000
shared_pool_reserved_size=9216000
large_pool_size = 10485760
java_pool_size = 10485760
log_checkpoint_interval = 1048576
processes = 300
log_buffer = 163840
log_archive_start = true
log_archive_dest_1 = "location=/oracle/primary/arc/"
log_archive_format = meta_arch_%t_%s.arc
LOG_ARCHIVE_DEST_2 = 'SERVICE=standby MANDATORY REOPEN=60'
LOG_ARCHIVE_DEST_STATE_2 = ENABLE
background_dump_dest = /oracle/primary/bdump
core_dump_dest = /oracle/primary/cdump
user_dump_dest = /oracle/primary/udump
db_block_size = 4096
remote_login_passwordfile = exclusive
os_authent_prefix = ""
java_soft_sessionspace_limit=83000000
java_max_sessionspace_size=1000000000
compatible = "8.1.0"
sort_area_size = 5242880
sort_area_retained_size = 3145728
utl_file_dir=/oracle2
hash_multiblock_io_count = 8

b) Create Password files
$orapwd file=/oracle/dbs/orapwprimary  password=oracle entries=5

c) Create the database.
sql>startup nomount pfile=PATH
sql>create database primary
    controlfile reuse
    logfile '/oracle/primary/redo01.log' size 10M reuse,
            '/oracle/primary/redo02.log' size 10M reuse,
            '/oracle/primary/redo03.log' size 10M reuse
                datafile '/oracle/primary/system01.dbf' size 100M reuse
autoextend on
next 10M maxsize 200M
character set WE8ISO8859P1;

d) Execute the scripts that create the dictionary views and packages.
SQL>@?/rdbms/admin/catalog.sql
SQL>@?/rdbms/admin/catproc.sql

2.Put Database in archive log mode.
SQL>shut immediate
SQL>startup mount
SQL>alter database archivelog;
SQL>alter database open;

3.Create standby control file.
SQL>ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/oracle/stprimary.ctl';

4. Set the log archive destination for the standby:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=standby MANDATORY REOPEN=60';
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;

Or set it in the pfile:

LOG_ARCHIVE_DEST_2 = 'SERVICE=standby MANDATORY REOPEN=60'
LOG_ARCHIVE_DEST_STATE_2 = ENABLE

5. Take a hot/cold backup of the database, pfile and password files.

6. Create/configure the standby database.

a) Copy the primary database backup, pfile and password files to the same locations on the standby server.

b) Create the pfile for the standby database from the copied pfile.
 
db_name = "primary"
instance_name = standby1
service_names = primary
control_files = ("/oracle/primary/stdby1.ctl","/oracle/primary/stdby2.ctl")
control_file_record_keep_time=30
open_cursors = 100
max_enabled_roles = 30
db_block_buffers = 40000
db_block_lru_latches=4
cursor_space_for_time=true
session_cached_cursors=40
shared_pool_size = 71680000
shared_pool_reserved_size=9216000
large_pool_size = 10485760
java_pool_size = 10485760
log_checkpoint_interval = 1048576
processes = 300
log_buffer = 163840
log_archive_start = true
log_archive_dest_1 = "location=/oracle/primary/arc/"
log_archive_format = meta_arch_%t_%s.arc
LOG_ARCHIVE_DEST_2 = 'SERVICE=primary MANDATORY REOPEN=60'
standby_archive_dest=/oracle/primary/arc/
LOG_ARCHIVE_DEST_STATE_2 = ENABLE
background_dump_dest = /oracle/primary/bdump
core_dump_dest = /oracle/primary/cdump
user_dump_dest = /oracle/primary/udump
db_block_size = 4096
remote_login_passwordfile = exclusive
os_authent_prefix = ""
java_soft_sessionspace_limit=83000000
java_max_sessionspace_size=1000000000
compatible = "8.1.0"
sort_area_size = 5242880
sort_area_retained_size = 3145728
utl_file_dir=/oracle2
hash_multiblock_io_count = 8


c) Start the standby database in nomount mode.

d) Put the copied standby control file in the location specified by the control_files parameter in the standby pfile.

e) Mount the standby database:

SQL>alter database mount standby database;

7. Start recovery of the standby database.
SQL>recover managed standby database;

To cancel recovery:

SQL>recover managed standby database cancel;

8. Manually resolve an archive gap.
a) Copy all pending archive logs to the standby archive destination, then run:
SQL>recover automatic standby database;
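To get a rough idea of how far the standby has applied redo, one simple check (assuming the standby is mounted) is to compare the highest log sequence recorded on each side:

-- Run on both primary and standby; the standby value should trail the primary only slightly.
SQL>select max(sequence#) from v$log_history;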

Tuesday, October 8, 2013

Oracle 10g:Automate AWR reports to get in mail attachment.

Automate Automatic Workload Repository Report in a mail




Logging in to each database and collecting reports by hand is the old-fashioned way. Let's automate the Oracle workload (AWR) reports.

This is useful when you manage a number of databases and do not want to pull reports manually every day. It is better to create a schedule that mails yesterday's AWR report for the whole working day in a single message, so you only need to check whether anything looked problematic during off-office hours.

The simple and easy way is to create one procedure and schedule it.

Let's do some coding.

1. Create a directory object for the parameter file that will hold the awrrpt input parameters:

--Report type
--Number of days
--Begin snapshot id
--End snapshot id
--Report name

These are the mandatory parameters needed to generate an AWR report (see the directory sketch below).
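The procedure below simply reuses the existing DATA_PUMP_DIR directory object. If you prefer a dedicated directory, a minimal sketch (the path is only an example; pick one that exists on your server):

SQL>CREATE OR REPLACE DIRECTORY awr_parfile_dir AS '/hot-bkp/expdp';
SQL>GRANT READ, WRITE ON DIRECTORY awr_parfile_dir TO system;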

2. Create the stored procedure.

--Change the directory name if you are not using DATA_PUMP_DIR

--Get host_command from one of the earlier blog posts

--Set the report path

CREATE OR REPLACE PROCEDURE SYSTEM.USP_AUTOMATE_AWRRPT
as

v_dbid              varchar2(20);
v_INSTANCE_NUMBER   varchar2(20);
v_bid               varchar2(20);
v_eid               varchar2(20);
v_parfile           UTL_FILE.file_type;
v_parfile_name      VARCHAR2(50);
v_commands          VARCHAR2(100);
v_location          VARCHAR2(50);
v_directory         VARCHAR2(50);

begin

--snapshot details
select DBID,INSTANCE_NUMBER,MIN(SNAP_ID) bid,MAX(SNAP_ID) eid
      INTO v_dbid,v_INSTANCE_NUMBER,v_bid,v_eid
 from
        DBA_HIST_SNAPSHOT
 where
        trunc(BEGIN_INTERVAL_TIME)=trunc(sysdate-1)
 Group by
          DBID,INSTANCE_NUMBER  ;
       
 --Get the directory location
Select
        DIRECTORY_NAME,DIRECTORY_PATH INTO v_directory,v_location
from
        dba_directories
where
        DIRECTORY_NAME='DATA_PUMP_DIR';
         
v_parfile_name:='awrrpt_parfile.sql';

v_parfile := UTL_FILE.fopen(v_directory, v_parfile_name, 'w', 32767);

v_commands:='define report_type=''html''';
UTL_FILE.PUT_LINE(v_parfile,v_commands );

v_commands:='define num_days=2';
UTL_FILE.PUT_LINE(v_parfile,v_commands );

v_commands:='define begin_snap='||v_bid;
UTL_FILE.PUT_LINE(v_parfile,v_commands );

v_commands:='define end_snap='||v_eid;
UTL_FILE.PUT_LINE(v_parfile,v_commands );

v_commands:='define report_name=''/tmp/awrrpt_'||'2'||'_'||v_bid||'_'||v_eid||'.html''';
UTL_FILE.PUT_LINE(v_parfile,v_commands );

v_commands:='@?/rdbms/admin/awrrpt.sql';

UTL_FILE.PUT_LINE(v_parfile,v_commands );

v_commands:='exit';

UTL_FILE.PUT_LINE(v_parfile,v_commands );
   
UTL_FILE.fclose(v_parfile);

-- The parameter file is left in place; the wrapper shell script executes (and can remove) it.


end;
/

3. Create a shell script to run the procedure and then execute the generated parameter file.

#!/bin/sh
export ORACLE_BASE=/oracle/app/oracle
export ORACLE_HOME=/oracle/app/oracle/product/10.2.0.4/db_1
export ORACLE_SID=pindb
PATH=$PATH:$ORACLE_HOME/bin

sqlplus system/manager <<EOF
execute SYSTEM.USP_AUTOMATE_AWRRPT;
@/hot-bkp/expdp/awrrpt_parfile.sql
EOF
exit


4. Schedule it after 12 AM (midnight) so it picks up yesterday's snapshots, and mail the generated report; a sketch follows below.
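A minimal scheduling sketch with cron and mailx; the script path, recipient address and the uuencode attachment trick are assumptions for illustration:

# Run the AWR wrapper shortly after midnight (crontab -e as the oracle user)
30 0 * * * /home/oracle/scripts/awrrpt_mail.sh
# Inside the wrapper, after the report is generated, mail it as an attachment, for example:
# uuencode /tmp/awrrpt_2_<begin>_<end>.html awrrpt.html | mailx -s "Daily AWR report" dba@example.com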


Friday, October 4, 2013

Oracle 10g:Silent Mode Upgrade 10.2.0.1.0 to 10.2.0.4.0 into solaris server

Oracle 10g Upgrade 10.2.0.1.0 to 10.2.0.4.0 into solaris.


1. Download the patch set file p6810189_10204_Solaris-64.zip.

2. Copy the file to a staging location and unzip it.

mkdir -p /oracle1/10204

unzip p6810189_10204_Solaris-64.zip

3. Create a response file (response.rsp) with the input parameters.

Sample file:

####################################################################
## Copyright (c) 1999, 2004 Oracle. All rights reserved.          ##
##                                                                ##
## Specify values for the variables listed below to customize     ##
## your installation.                                             ##
##                                                                ##
## Each variable is associated with a comment. The comment        ##
## identifies the variable type.                                  ##
##                                                                ##
## Please specify the values in the following format:             ##
##                                                                ##
##         Type         Example                                   ##
##         String       "Sample Value"                            ##
##         Boolean      True or False                             ##
##         Number       1000                                      ##
##         StringList   {"String value 1","String Value 2"}       ##
##                                                                ##
## The values that are given as <Value Required> need to be       ##
## specified for a silent installation to be successful.          ##
##                                                                ##
##                                                                ##
## This response file is generated by Oracle Software             ##
## Packager.                                                      ##
####################################################################


RESPONSEFILE_VERSION=2.2.1.0.0

#-------------------------------------------------------------------------------
#Name       : UNIX_GROUP_NAME
#Datatype   : String
#Description: Unix group to be set for the inventory directory. Valid only in Unix platforms.
#Example: UNIX_GROUP_NAME = "install"
#-------------------------------------------------------------------------------
UNIX_GROUP_NAME="oinstall"

#-------------------------------------------------------------------------------
#Name       : FROM_LOCATION
#Datatype   : String
#Description: Complete path to the products.xml.
#Example: FROM_LOCATION = "../stage/products.xml"
#-------------------------------------------------------------------------------
FROM_LOCATION="/export/home/oracle/10204/Disk1/stage/products.xml"

#-------------------------------------------------------------------------------
#Name       : FROM_LOCATION_CD_LABEL
#Datatype   : String
#Description: This variable should only be used in multi-CD installations. It includes the label of the compact disk where the file "products.xml" exists. The label can be found in the file "disk.label" in the same directory as products.xml.
#Example: FROM_LOCATION_CD_LABEL = "CD Label"
#-------------------------------------------------------------------------------
FROM_LOCATION_CD_LABEL=<Value Unspecified>

#-------------------------------------------------------------------------------
#Name       : ORACLE_HOME
#Datatype   : String
#Description: Complete path of the Oracle Home.
#Example: ORACLE_HOME = "C:\OHOME1"
#-------------------------------------------------------------------------------
ORACLE_HOME="/oracle"

#-------------------------------------------------------------------------------
#Name       : ORACLE_HOME_NAME
#Datatype   : String
#Description: Oracle Home Name. Used in creating folders and services.
#Example: ORACLE_HOME_NAME = "OHOME1"
#-------------------------------------------------------------------------------
ORACLE_HOME_NAME="OraDb10g_home1"

#-------------------------------------------------------------------------------
#Name       : SHOW_WELCOME_PAGE
#Datatype   : Boolean
#Description: Set to true if the Welcome page in OUI needs to be shown.
#Example: SHOW_WELCOME_PAGE = false
#-------------------------------------------------------------------------------
SHOW_WELCOME_PAGE=true




4. Execute runInstaller to start the installation.

Make sure the database, listener and all other Oracle processes from this home are shut down first. Verify with:

ps -ef|grep oracle
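If anything is still running, a minimal shutdown sketch (assuming ORACLE_HOME and ORACLE_SID are set for the instance being patched):

lsnrctl stop
sqlplus "/ as sysdba" <<EOF
shutdown immediate;
exit;
EOF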


$cd 10204/Disk1

./runInstaller -silent -responseFile /export/home/oracle/10204/Disk1/response.rsp

5. Check the status message at the end of the run. On success you should see:

The installation of Oracle Database 10g Release 2 Patch Set 3 was successful.

6. Execute root.sh as the root user to apply the final changes.

Wednesday, September 18, 2013

Oracle/Linux:RSYNC:Transfer archives from one server to another through rsync utility

RSYNC for Copy files



This is one of the simplest methods to copy archive logs from the primary server to a backup/standby server.

For this, the rsync daemon must be running on the primary server.

On the primary server:

1. Configure /etc/rsyncd.conf:

[oracle@Bhavu etc]$ cat /etc/rsyncd.conf

[archlog]
        path = /oracle/archive/bhavudb
        comment = For bhavu database archive log backup
        uid = root
        gid = root

2. Check that the rsync daemon is running:

[oracle@Bhavu etc]$ ps -ef|grep rsync
oracle   13287  3699  0 14:44 pts/1    00:00:00 grep rsync
root     29022     1  0 Aug28 ?        00:00:00 /usr/bin/rsync --daemon
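If the daemon is not running, it can be started with the command below; by default it reads /etc/rsyncd.conf:

/usr/bin/rsync --daemon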

On the backup/standby server:

1. Create a shell script (rsync.sh) for the rsync command:

#!/bin/bash

pro_check=`ps -ef | grep rsync | grep -v grep | grep -v rsync-port`

if [[ -n $pro_check ]] ; then
                echo "Rsync Already Running...."
else
                echo "Starting Rsync...."
                rsync -avzpt primarydatabase_ip::archlog Backup_server_location
fi

2. For more options, see man rsync. To keep pulling new archives, the script can be scheduled with cron (see the sketch below).
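A minimal cron sketch, assuming rsync.sh is saved under /home/oracle/scripts on the backup server:

# Pull new archive logs from the primary every 15 minutes
*/15 * * * * /home/oracle/scripts/rsync.sh >> /tmp/rsync_archlog.log 2>&1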



Tuesday, September 17, 2013

MYSQL:Tuning MYSQL Parameters

Tuning MYSQL Parameters


Below is a list of variables to consider tuning for better MySQL performance.

The list is in alphabetical order. Before changing anything, check the current values (see the example after this paragraph).
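A quick way to inspect a current value and try a change at runtime (the variable names here are just examples from the list below; only dynamic variables can be changed with SET GLOBAL, the rest go into my.cnf and need a restart):

mysql> SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';
mysql> SHOW GLOBAL STATUS LIKE 'Sort_merge_passes';
mysql> SET GLOBAL long_query_time = 10;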

1. have_query_cache
Values:YES
Description:
Shows whether the server supports the query cache (whether it is actually used is controlled by query_cache_size and query_cache_type).

2. innodb_additional_mem_pool_size
Values:50M
Description:
As per MySQL documentation, this pool is used to store data dictionary information and other internal data structures. If InnoDB runs out of memory on this pool, it starts allocating from OS.
Most of the additional memory pool usage goes to tables in the data dictionary and connections. Normally, with increasing number of tables you may need to allocate more memory to this pool. But as modern OS'es have good and fast memory allocating functions, this variable does not hit performance.

3. innodb_buffer_pool_size 
Values 50-80% of Installed RAM
Description:
The size in bytes of the memory buffer InnoDB uses to cache data and indexes of its tables. The default value is 8MB. The larger you set this value, the less disk I/O is needed to access data in tables. On a dedicated database server, you may set this to up to 80% of the machine physical memory size. However, do not set it too large because competition for physical memory might cause paging in the operating system. Also, the time to initialize the buffer pool is roughly proportional to its size. On large installations, this initialization time may be significant. For example, on a modern Linux x86_64 server, initialization of a 10GB buffer pool takes approximately 6 seconds..

4.  innodb_log_buffer_size   
Values:52428800    
Description:
The MySQL InnoDB log buffer allows transactions to run without having to write the log to disk before the transactions commit. The size of this buffer is configured with the innodb_log_buffer_size variable.

5. innodb_log_file_size 
Values:25% of  innodb_buffer_pool_size 
Description:
Transaction Logs File size

6. innodb_thread_concurrency       
Values:50
Description:
InnoDB tries to keep the number of operating system threads concurrently inside InnoDB less than or equal to the limit given by this variable. Once the number of threads reaches this limit, additional threads are placed into a wait state within a FIFO queue for execution. Threads waiting for locks are not counted in the number of concurrently executing threads.

7. join_buffer_size               
Values:Better to do at session level for large queries
Description:
The minimum size of the buffer that is used for plain index scans, range index scans, and joins that do not use indexes and thus perform full table scans. Normally, the best way to get fast joins is to add indexes. Increase the value of join_buffer_size to get a faster full join when adding indexes is not possible. One join buffer is allocated for each full join between two tables. For a complex join between several tables for which indexes are not used, multiple join buffers might be necessary. There is no gain from setting the buffer larger than required to hold each matching row, and all joins allocate at least the minimum size, so use caution in setting this variable to a large value globally. It is better to keep the global setting small and change to a larger setting only in sessions that are doing large joins. Memory allocation time can cause substantial performance drops if the global size is larger than needed by most queries that use it. 

8. key_buffer_size    
Values:402653184
Description:
Index blocks for MyISAM tables are buffered and are shared by all threads.

9. log_slow_queries       
Values:OFF/ON/PATH
Description:
The slow query log consists of SQL statements that took more than long_query_time seconds to execute and (as of MySQL 5.1.21) required at least min_examined_row_limit rows to be examined. The default value of long_query_time is 10. Beginning with MySQL 5.1.21, the minimum is 0, and the value can be specified to a resolution of microseconds. For logging to a file, times are written including the microseconds part. For logging to tables, only integer times are written; the microseconds part is ignored. Prior to MySQL 5.1.21, the minimum value is 1, and the value for this variable must be an integer. 

10. long_query_time         
Values:10
Description:
If a query takes longer than this many seconds, the server increments the Slow_queries status variable. If the slow query log is enabled, the query is logged to the slow query log file. This value is measured in real time, not CPU time, so a query that is under the threshold on a lightly loaded system might be above the threshold on a heavily loaded one. The default value of long_query_time is 10. Beginning with MySQL 5.1.21, the minimum is 0, and the value can be specified to a resolution of microseconds. For logging to a file, times are written including the microseconds part. For logging to tables, only integer times are written; the microseconds part is ignored. Prior to MySQL 5.1.21, the minimum value is 1, and the value for this variable must be an integer. 

11. max_connections     
Values:250
Description:
The maximum permitted number of simultaneous client connections. By default, this is 100.Increasing this value increases the number of file descriptors that mysqld requires.

12. max_heap_table_size    
Values: 314572800   
Description:
This variable sets the maximum size to which user-created MEMORY tables are permitted to grow. The value of the variable is used to calculate MEMORY table MAX_ROWS values. Setting this variable has no effect on any existing MEMORY table, unless the table is re-created with a statement such as CREATE TABLE or altered with ALTER TABLE or TRUNCATE TABLE. A server restart also sets the maximum size of existing MEMORY tables to the global max_heap_table_size value. 

13. query_cache_limit 
Values:1-10M
Description:
This is the maximum size query (in bytes) that will be cached.

14. query_cache_min_res_unit        
Values:4096
Description:
The minimum size (in bytes) for blocks allocated by the query cache. 

15. query_cache_size         
Values:31457280        
Description:
The amount of memory allocated for caching query results. The default value is 0, which disables the query cache. The permissible values are multiples of 1024; other values are rounded down to the nearest multiple. 

16. read_buffer_size        
Values:131072
Description:
Each thread that does a sequential scan allocates a buffer of this size (in bytes) for each table it scans. If you do many sequential scans, you might want to increase this value, which defaults to 131072. The value of this variable should be a multiple of 4KB. If it is set to a value that is not a multiple of 4KB, its value will be rounded down to the nearest multiple of 4KB. 

17. read_rnd_buffer_size        
Values:15728640
Description:
When reading rows in sorted order following a key-sorting operation, the rows are read through this buffer to avoid disk seeks.  Setting the variable to a large value can improve ORDER BY performance by a lot. However, this is a buffer allocated for each client, so you should not set the global variable to a large value. Instead, change the session variable only from within those clients that need to run large queries. 


18. sort_buffer_size    
Values:15728640
Description:
Each session that needs to do a sort allocates a buffer of this size. sort_buffer_size is not specific to any storage engine and applies in a general manner for optimization. See Section 8.2.1.11, “ORDER BY Optimization”, for example.
If you see many Sort_merge_passes per second in SHOW GLOBAL STATUS output, you can consider increasing the sort_buffer_size value to speed up ORDER BY or GROUP BY operations that cannot be improved with query optimization or improved indexing. The entire buffer is allocated even if it is not all needed, so setting it larger than required globally will slow down most queries that sort. It is best to increase it as a session setting, and only for the sessions that need a larger size. On Linux, there are thresholds of 256KB and 2MB where larger values may significantly slow down memory allocation, so you should consider staying below one of those values. Experiment to find the best value for your workload.


19. table_cache      
Values:512
Description:
The number of open tables for all threads. Increasing this value increases the number of file descriptors that mysqld requires. 

20. thread_cache_size               
Values:8
Description:
How many threads the server should cache for reuse. When a client disconnects, the client's threads are put in the cache if there are fewer than thread_cache_size threads there. Requests for threads are satisfied by reusing threads taken from the cache if possible, and only when the cache is empty is a new thread created. This variable can be increased to improve performance if you have a lot of new connections. Normally, this does not provide a notable performance improvement if you have a good thread implementation. However, if your server sees hundreds of connections per second you should normally set thread_cache_size high enough so that most new connections use cached threads. By examining the difference between the Connections and Threads_created status variables, you can see how efficient the thread cache is. 

21. wait_timeout 
Values:1800
Description:
MySQL would close any connection that was idle for more than  wait_timeout seconds.