Thursday, December 22, 2022

Create a file system larger than 2 TB on a Linux system



###########################################################################
###########################################################################
Run the following commands to install the parted and e2fsprogs packages:

yum install -y parted
yum install -y e2fsprogs


Perform the following operations to partition and format a data disk larger than 2 TiB in size and mount the file system.


Check whether a data disk exists.

Run the following command:
fdisk -l

A command output similar to the following one is returned. The command output includes information about the data disk. If no data disk information is returned, the instance does not have data disks attached.

Disk /dev/vdb: 3221.2 GB, 3221225472000 bytes, 6291456000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Use the Parted tool to partition the data disk.

Run the following command to start partitioning:
parted /dev/vdb

Run the following command to convert the partition format from MBR to GPT:
mklabel gpt

Run the following command to create a primary partition and specify the start and end sectors for the partition:
mkpart primary 1 100%

Here, 100% allocates the rest of the disk to the partition; adjust the end point as required.

Run the following command to check whether the partition is aligned:
align-check optimal 1

A command output similar to the following one is returned:
1 aligned


Note: -  
If 1 not aligned is returned, the partition is not aligned. 
In that case, run the following commands and use the (<optimal_io_size> + <alignment_offset>)/<physical_block_size> formula to obtain a start sector number that aligns the partition for optimal performance. 

For example, if the computed start sector number is 1024, you can run the mkpart primary 1024s 100% command to recreate the primary partition.

cat /sys/block/vdb/queue/optimal_io_size
cat /sys/block/vdb/queue/minimum_io_size
cat /sys/block/vdb/alignment_offset
cat /sys/block/vdb/queue/physical_block_size
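The alignment formula in the note can be sketched in shell. This is illustrative only: the function name is made up and the sample numbers are typical values, not ones read from this system; on a real host, feed it the values read from the four files above.

```shell
# Sketch: compute the aligned start sector from
# (optimal_io_size + alignment_offset) / physical_block_size.
aligned_start_sector() {
  local optimal_io_size=$1 alignment_offset=$2 physical_block_size=$3
  echo $(( (optimal_io_size + alignment_offset) / physical_block_size ))
}

# Illustrative values: optimal_io_size=524288, alignment_offset=0,
# physical_block_size=512 gives start sector 1024, matching the
# "mkpart primary 1024s 100%" example above.
aligned_start_sector 524288 0 512
```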


Run the following command to view the partition table:
print


Run the following command to exit the Parted tool:
quit




[Figure: Partitioning by using Parted (result of the partitioning operation)]


Run the following command to enable the system to re-read the partition table:
partprobe
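For repeatable provisioning, the interactive parted session above can be scripted with parted -s. A sketch, with DEV as an assumption and a DRY_RUN guard that only prints the commands, so nothing touches a disk until you remove it:

```shell
# Sketch of the interactive steps above as a non-interactive script.
# DEV is an assumption; DRY_RUN=1 prints commands instead of running them.
DEV=/dev/vdb
DRY_RUN=1

run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$@"; else "$@"; fi; }

run parted -s "$DEV" mklabel gpt            # GPT label (required above 2 TiB)
run parted -s "$DEV" mkpart primary 1 100%  # one primary partition, whole disk
run parted -s "$DEV" align-check optimal 1  # verify alignment
run partprobe "$DEV"                        # re-read the partition table
```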


Run one of the following commands to create a file system on the /dev/vdb1 partition, based on your needs.

Create an ext4 file system.
mkfs -t ext4 /dev/vdb1




Run the following command to create a mount point named /test:
mkdir /test

Run the following command to mount the /dev/vdb1 partition to /test:
mount /dev/vdb1 /test

Run the following command to view the current disk space and usage:
df -h





If the command output shows the information of the new file system, the mount operation is successful and you can use the new file system.

(Recommended) Write the information of the new partition to /etc/fstab to enable this partition to be automatically mounted on system startup.


Run the cp /etc/fstab /etc/fstab.bak command to back up /etc/fstab.

Run the following command to write information of the new partition to /etc/fstab:
echo `blkid /dev/vdb1 | awk '{print $2}' | sed 's/\"//g'` /test ext4 defaults 0 0 >> /etc/fstab



Note
You must run the command as the root user. If you are a non-root user, 
you can run the su - command to switch to the root user, and then run this command. 

Alternatively, you can run the sudo vi /etc/fstab command to edit /etc/fstab.
We recommend that you use a universally unique identifier (UUID) to reference the new partition in /etc/fstab. 
You can run the blkid command to obtain the UUID of the new partition.
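Following the note's UUID recommendation, the fstab entry can be built from blkid output. A sketch: fstab_line is a hypothetical helper and the UUID shown is made up.

```shell
# Sketch: build a UUID-based /etc/fstab line for the new partition.
fstab_line() {
  local uuid=$1 mountpoint=$2 fstype=$3
  echo "UUID=${uuid} ${mountpoint} ${fstype} defaults 0 0"
}

# Made-up UUID for illustration only.
fstab_line 1b2c3d4e-aaaa-bbbb-cccc-123456789abc /test ext4

# On a real system, append the result only after backing up /etc/fstab:
# fstab_line "$(blkid -s UUID -o value /dev/vdb1)" /test ext4 >> /etc/fstab
```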



Run the following command to check the information of /etc/fstab:
cat /etc/fstab



###########################################################################

###########################################################################


Error: Failed to download metadata for repo 'appstream': Cannot prepare internal mirrorlist: No URLs in mirrorlist





Solution

~~~~~~~~~~~~~~~~~~~~


On the CentOS system, execute the commands below:

cd /etc/yum.repos.d/

sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-*

sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-*
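To see what those two sed substitutions actually do, here is the same transformation applied to sample lines instead of the real repo files (the sample lines are illustrative):

```shell
# Sketch: apply the two substitutions from the fix to a single line,
# without touching any files. Comments out mirrorlist= entries and
# points baseurl at vault.centos.org.
fix_repo_line() {
  echo "$1" | sed -e 's/mirrorlist/#mirrorlist/g' \
                  -e 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g'
}

fix_repo_line 'mirrorlist=http://mirrorlist.centos.org/?release=8'
fix_repo_line '#baseurl=http://mirror.centos.org/$contentdir/$releasever/BaseOS/$basearch/os/'
```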


Now try to update or install packages, for example:

yum -y install java






Friday, July 22, 2022

Oracle GoldenGate extract recovery (Missing archivelogs)



Issue description: 

    The issue occurs when archive logs that are still needed by an Oracle GoldenGate (OGG) extract get deleted, for example when purging old archive logs using RMAN with the "force" option. 

RMAN is integrated with OGG, so under normal circumstances RMAN never deletes an archive log that is required by an OGG extract. 

With the "force" option, however, it does delete archive logs required by the OGG extract. Using the "force" option while purging old archive logs is certainly not a best practice, 

but accidents do happen. This post illustrates the steps needed to recover from such a situation.


Example:

The error in the Oracle GoldenGate extract report file looks similar to the following example:


ERROR   OGG-00868  Error code 1291, error message: ORA-01291: missing logfile

(Missing Log File WAITING FOR DICTIONARY REDO. Read Position SCN: 0.46921993 (46921993)).











1) Find out the archive logs (and the range) that the extract is complaining about. Let’s say the extract is complaining about SCN number 46921993.

Find out which archive log that SCN number belongs to.

SQL> select sequence# from v$archived_log where 46921993 between first_change# and next_change#;

 SEQUENCE#
----------
       100

SQL> select sequence# from v$archived_log where sequence# > 100 and name is NULL;

(Say the query came back with 10 archive logs, 101 through 110.)

 SEQUENCE#
----------
       101
       102
       103
       104
       105
       106
       107
       108
       109
       110





2) Restore the archive log using RMAN.
RMAN> restore archivelog from logseq 100 until logseq 110;





















3) Register the archive logs for the capture process.
SQL> alter database register logical logfile '/<path to logfile>/<logfilename>' for 'OGG Capture Name';

4) Restart the OGG extract.
GGSCI> Start extract <OGG extract name>
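The restore-and-register steps can be generated for a whole sequence range. This sketch only prints the RMAN and SQL commands; the archive log path pattern and capture name are placeholders (v$archived_log holds the real file names), so adapt them to your environment:

```shell
# Sketch: emit the RMAN restore command and one register statement per
# restored log. logdir/arch_<seq>.arc is a placeholder naming pattern.
gen_ogg_recovery_cmds() {
  local first=$1 last=$2 logdir=$3 capture=$4 s
  echo "RMAN> restore archivelog from logseq ${first} until logseq ${last};"
  for s in $(seq "$first" "$last"); do
    echo "SQL> alter database register logical logfile '${logdir}/arch_${s}.arc' for '${capture}';"
  done
}

gen_ogg_recovery_cmds 100 110 /u01/arch OGG_CAPTURE
```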


Note: This solution is generic and applies to both Oracle Cloud and on-premises customers

















Thursday, May 5, 2022

Oracle 19c Restore Point Replication From Primary To Standby


Primary Side:-
~~~~~~~~~~~~

SQL> SELECT database_role, open_mode FROM v$database;

DATABASE_ROLE    OPEN_MODE
---------------- --------------------
PRIMARY          READ WRITE



SQL> create restore point FB_Restore_poiint  guarantee flashback database;
Restore point created.



SQL> select SCN, GUARANTEE_FLASHBACK_DATABASE, TIME, NAME, REPLICATED from v$restore_point;

       SCN GUARANTEE_FLASHBACK_DATABASE TIME                            NAME              REPLICATED
---------- ---------------------------- ------------------------------- ----------------- ----------
   2443446 YES                          20-OCT-19 04.15.22.000000000 PM FB_Restore_poiint NO




PRIMARY - Alertlog
~~~~~~~~~~

2022-11-20T16:15:22.184171+05:30
Created guaranteed restore point FB_Restore_poiint





Standby Side:-
~~~~~~~~~~~~~

SQL> SELECT database_role, open_mode FROM v$database;

DATABASE_ROLE    OPEN_MODE
---------------- --------------------
PHYSICAL STANDBY READ ONLY WITH APPLY



SQL> select SCN, GUARANTEE_FLASHBACK_DATABASE, TIME, NAME, REPLICATED from v$restore_point;

       SCN GUARANTEE_FLASHBACK_DATABASE TIME                            NAME                      REPLICATED
---------- ---------------------------- ------------------------------- ------------------------- ----------
   2443446 NO                           20-OCT-19 04.15.22.000000000 PM FB_Restore_poiint_PRIMARY YES



SQL> select status,instance_name,database_role,protection_mode ,flashback_on from v$database,v$instance;

STATUS INSTANCE_NAME DATABASE_ROLE    PROTECTION_MODE      FLASHBACK_ON
------ ------------- ---------------- -------------------- ------------
OPEN   delhi         PHYSICAL STANDBY MAXIMUM AVAILABILITY YES
 
    
        The naming convention for a replicated restore point uses the name of the restore point on the primary database suffixed with _PRIMARY. 
        If a replicated restore point with the same name already exists on the standby database, then a new replicated restore point is not created. 

    For example, when you create a restore point named FB_Restore_poiint on the primary database, the replicated restore point is named FB_Restore_poiint_PRIMARY.
    When you delete a restore point on the primary, the corresponding replicated restore point on the standby is also deleted.
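The naming rule is simple enough to state as a one-liner (purely illustrative):

```shell
# Sketch of the replicated restore point naming convention:
# primary restore point name + "_PRIMARY".
replicated_name() { echo "${1}_PRIMARY"; }

replicated_name FB_Restore_poiint
```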

 




Automatic Flashback of a Mounted Standby After a Primary RESETLOGS Operation

 


A standby database that is in a mounted state can automatically follow the primary database after a RESETLOGS operation on the primary. 

This simplifies standby management after a RESETLOGS operation on the primary.

When flashback or point-in-time recovery is performed either on a primary database or a PDB in the primary database, the primary database or PDB is moved to a previous point in time and the primary is then opened with the RESETLOGS option.

 A new incarnation of the primary or of the PDB in the primary is created. For the standby to automatically follow the primary, the managed recovery process (MRP) performs the following actions:

  • detects the new incarnation

  • flashes back the standby or the PDB on the standby to the same point in time as that of the primary or the PDB on the primary

  • restarts the standby recovery and moves the standby to the new branch of redo

The flashback operation will succeed only when the standby database has sufficient flashback data.

If you do not want the standby to automatically follow the primary, either keep the standby database in OPEN mode or stop the MRP process on the standby.

If the standby database is open in read only mode, the corresponding error messages are recorded in the alert log. When you restart the MRP after closing the physical standby, the recovery process automatically flashes back the standby database and continues to apply the new branch of redo.




REFER:-

https://docs.oracle.com/en/database/oracle/oracle-database/19/sbydb/managing-oracle-data-guard-physical-standby-databases.html#GUID-252097AC-3070-43B6-88D8-919AE27F97AD




Oracle 19c Restore Point Replication From Primary To Standby


Oracle Database Release 19c New Feature
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  • The process of flashing back a physical standby to a point in time that was captured on the primary is simplified by automatically replicating restore points from primary to the standby.
  • These restore points are called replicated restore points.
  • Irrespective of whether a restore point on the primary database is a guaranteed restore point or a normal restore point, the corresponding replicated restore point is
    always a normal restore point.

NOTE: Starting from 19c, be aware that standby can automatically perform flashback in response to similar operation on primary. Refer:

Automatic Flashback of a Mounted Standby After a Primary RESETLOGS Operation


Conditions for automatic replication of restore points from a primary database to the standby database:

  • COMPATIBLE initialization parameter for both the primary database and the standby database is set to 19.0.0 or higher
  • Primary database is open
  • A restore point that is created on a primary database while the primary is in mount mode is not replicated. This restriction exists because restore point information is replicated through the redo.

 

CREATE GUARANTEED RESTORE POINT

1. Stop redo transport and redo apply

a)If broker is not configured:

On primary database:
SQL> alter system set log_archive_dest_state_n='defer'; =====>>>>>replace n with the corresponding number for remote destinations

On standby database:
SQL> alter database recover managed standby database cancel;

b)If broker is in place:

DGMGRL> edit database chicago set state = 'TRANSPORT-OFF';
DGMGRL> edit database boston set state = 'APPLY-OFF';

2. Set GRP in standby database

On standby database:
SQL> CREATE RESTORE POINT grp_dg GUARANTEE FLASHBACK DATABASE;

3. Set GRP in primary database

On primary database:
SQL> CREATE RESTORE POINT grp_dg GUARANTEE FLASHBACK DATABASE;

4. Enable redo transport and redo apply

a)If broker is not configured:

On primary database:
SQL> alter system set log_archive_dest_state_n='enable'; =====>>>>>replace n with the corresponding number for remote destinations

On standby database:
SQL> alter database recover managed standby database using current logfile disconnect;

b)If broker is in place:

DGMGRL> edit database chicago set state = 'TRANSPORT-ON';
DGMGRL> edit database boston set state = 'APPLY-ON';
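The "replace n" notes in these steps can be made concrete with a tiny helper that prints the ALTER SYSTEM statement for a given destination number and state; the helper itself is illustrative, not Oracle tooling:

```shell
# Sketch: emit the log_archive_dest_state_n statement for a given
# remote destination number (n) and state (defer or enable).
arch_dest_state_sql() {
  local n=$1 state=$2
  echo "alter system set log_archive_dest_state_${n}='${state}';"
}

arch_dest_state_sql 2 defer    # before creating/flashing back/dropping the GRP
arch_dest_state_sql 2 enable   # after the operation completes
```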

 

FLASHBACK DATABASE TO GUARANTEED RESTORE POINT

1. Stop redo transport and redo apply

a)If broker is not configured:

On primary database:
SQL> alter system set log_archive_dest_state_n='defer'; =====>>>>>replace n with the corresponding number for remote destinations

On standby database:
SQL> alter database recover managed standby database cancel;

b)If broker is in place:

DGMGRL> edit database chicago set state = 'TRANSPORT-OFF';
DGMGRL> edit database boston set state = 'APPLY-OFF';

2. Shutdown Primary Database and start one instance in mount stage

3. Flashback primary database to restore point

On primary database:
SQL> flashback database to RESTORE POINT grp_dg;
SQL> alter database open resetlogs;

4. Shutdown Standby database and start one instance in mount stage

5. Flashback standby database

On standby database:
SQL> flashback database to RESTORE POINT grp_dg;

6. Enable redo transport and redo apply

a)If broker is not configured:

On primary database:
SQL> alter system set log_archive_dest_state_n='enable'; =====>>>>>replace n with the corresponding number for remote destinations

On standby database:
SQL> alter database recover managed standby database using current logfile disconnect;

b)If broker is in place:

DGMGRL> edit database chicago set state = 'TRANSPORT-ON';
DGMGRL> edit database boston set state = 'APPLY-ON';

7. If an Active Data Guard license is used, open the standby database read-only

 

DROP GUARANTEED RESTORE POINT

1. Stop redo transport and redo apply

a)If broker is not configured:

On primary database:
SQL> alter system set log_archive_dest_state_n='defer'; =====>>>>>replace n with the corresponding number for remote destinations

On standby database:
SQL> alter database recover managed standby database cancel;

b)If broker is in place:

DGMGRL> edit database chicago set state = 'TRANSPORT-OFF';
DGMGRL> edit database boston set state = 'APPLY-OFF';

2. Drop GRP in primary database

On primary database:
SQL> drop RESTORE POINT grp_dg;

3. Drop GRP in standby database

Ensure the standby database is in mount stage and drop GRP:

SQL> drop restore point grp_dg;

If an Active Data Guard license is used, open the standby database read-only after dropping the GRP

4. Enable redo transport and redo apply

a)If broker is not configured:

On primary database:
SQL> alter system set log_archive_dest_state_n='enable'; =====>>>>>replace n with the corresponding number for remote destinations

On standby database:
SQL> alter database recover managed standby database using current logfile disconnect;

b)If broker is in place:

DGMGRL> edit database chicago set state = 'TRANSPORT-ON';
DGMGRL> edit database boston set state = 'APPLY-ON';






https://docs.oracle.com/en/database/oracle/oracle-database/19/sbydb/managing-oracle-data-guard-physical-standby-databases.html#GUID-252097AC-3070-43B6-88D8-919AE27F97AD


Monday, April 18, 2022

Find High Water Mark in ORACLE database




Information

~~~~~~~~~~~~

Each table is made up of extents, and each extent is made up of Oracle blocks; a common block size is 8K. 

When a table is created, the high water mark (HWM) starts at the beginning of the segment. 

The high water mark keeps moving forward as data is saved into the database. 


Scenario: inserting data moves the high water mark forward, but when you delete data, the blocks below the HWM may become empty while the high water mark stays where it is. 


For example, suppose you load a table with 1 million rows and the high water mark ends up at 1 GB. 

If you then delete all the rows, the high water mark remains at 1 GB. 

The only ways to reduce the HWM are to rebuild the table or to truncate it: TRUNCATE TABLE resets the high water mark.



Why do we need to check the HWM?

The high water mark (HWM) comes into play with tables that have heavy insert, update, and delete activity. 

As data changes in a table, the HWM reflects how much of the segment has been used, in terms of blocks used and free in the segment object.

To resize a datafile and reclaim space, you need to check the high water mark usage; based on that high water mark, you can resize the respective datafile and get the space reclaimed.

We had a critical space issue on a data warehouse environment; to reclaim space, we identified the datafiles using the query below and resized the ones where space could be recovered.




STEPS to find the HWM


Step 1: Check the current data file size.


SET TERMOUT OFF;
COLUMN current_instance NEW_VALUE current_instance NOPRINT;
SELECT rpad(instance_name, 17) current_instance FROM v$instance;
SET TERMOUT ON;

PROMPT
PROMPT +------------------------------------------------------------------------+
PROMPT | Report   : Data File Report (all physical files)                       |
PROMPT | Instance : &current_instance                                           |
PROMPT +------------------------------------------------------------------------+

SET ECHO        OFF
SET FEEDBACK    6
SET HEADING     ON
SET LINESIZE    180
SET PAGESIZE    50000
SET TERMOUT     ON
SET TIMING      OFF
SET TRIMOUT     ON
SET TRIMSPOOL   ON
SET VERIFY      OFF

CLEAR COLUMNS
CLEAR BREAKS
CLEAR COMPUTES

COLUMN tablespace      FORMAT a35            HEADING 'Tablespace Name / File Class'
COLUMN filename        FORMAT a40            HEADING 'Filename'
COLUMN filesize        FORMAT 9999999999999  HEADING 'File Size MB'
COLUMN autoextensible  FORMAT a4             HEADING 'Auto'
COLUMN increment_by    FORMAT 999999999999   HEADING 'Next in MB'
COLUMN maxbytes        FORMAT 999999999999   HEADING 'Max Size MB'

BREAK ON report
COMPUTE sum OF filesize ON report
COMPUTE sum OF maxbytes ON report

SELECT /*+ ordered */
   d.tablespace_name                      tablespace
 , d.file_name                            filename
 , d.bytes/1024/1024                      filesize
 , d.autoextensible                       autoextensible
 , (d.increment_by * e.value)/1024/1024   increment_by
 , d.maxbytes/1024/1024                   maxbytes
FROM
   sys.dba_data_files d
 , v$datafile v
 , (SELECT value FROM v$parameter WHERE name = 'db_block_size') e
WHERE
   d.file_name = v.name
UNION
SELECT
   d.tablespace_name || ' **TEMP**'       tablespace
 , d.file_name                            filename
 , d.bytes/1024/1024                      filesize
 , d.autoextensible                       autoextensible
 , (d.increment_by * e.value)/1024/1024   increment_by
 , d.maxbytes/1024/1024                   maxbytes
FROM
   sys.dba_temp_files d
 , (SELECT value FROM v$parameter WHERE name = 'db_block_size') e
UNION
SELECT
   '[ ONLINE REDO LOG  ]'
 , a.member
 , b.bytes/1024/1024
 , null
 , TO_NUMBER(null)
 , TO_NUMBER(null)
FROM
   v$logfile a
 , v$log b
WHERE
   a.group# = b.group#
UNION
SELECT
   '[ STANDBY REDO LOG ]'
 , a.member
 , b.bytes/1024/1024
 , null
 , TO_NUMBER(null)
 , TO_NUMBER(null)
FROM
   v$logfile a
 , v$standby_log b
WHERE
   a.group# = b.group#
UNION
SELECT
   '[ CONTROL FILE     ]'
 , a.name
 , TO_NUMBER(null)
 , null
 , TO_NUMBER(null)
 , TO_NUMBER(null)
FROM
   v$controlfile a
ORDER BY 1,2;



Step 2: Suggest a new size for each datafile based on its HWM.


set lines 180 pages 200
select 'alter database datafile '||''''||file_name||''''||' resize '||round(highwater+100)||'m;'
from (
  select /*+ rule */
     a.tablespace_name,
     a.file_name,
     a.bytes/1024/1024 file_size_MB,
     (b.maximum + c.blocks - 1) * d.db_block_size/1024/1024 highwater
  from dba_data_files a,
       (select file_id, max(block_id) maximum
        from dba_extents
        group by file_id) b,
       dba_extents c,
       (select value db_block_size
        from v$parameter
        where name = 'db_block_size') d
  where a.file_id  = b.file_id
  and   c.file_id  = b.file_id
  and   c.block_id = b.maximum
  order by a.tablespace_name, a.file_name
);
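The arithmetic behind the resize suggestion can be checked standalone: the high water mark of a datafile is (highest extent's starting block_id + that extent's blocks - 1) * block size, and the script adds a 100 MB safety margin. A sketch with made-up sample values:

```shell
# Sketch of the query's arithmetic. Inputs are the highest extent's
# starting block_id, that extent's block count, and the DB block size
# in bytes; output is the suggested resize target in MB (HWM + 100).
suggest_resize_mb() {
  local max_block_id=$1 ext_blocks=$2 block_size=$3
  local hwm_mb=$(( (max_block_id + ext_blocks - 1) * block_size / 1024 / 1024 ))
  echo $(( hwm_mb + 100 ))
}

# Illustrative values: highest extent starts at block 128000 with
# 1024 blocks, using 8192-byte blocks.
suggest_resize_mb 128000 1024 8192
```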



Step 3: Find the high water mark of a particular table.


SQL> SET LINESIZE 300
SET SERVEROUTPUT ON
SET VERIFY OFF
DECLARE
  CURSOR cu_tables IS
    SELECT a.owner, a.table_name
    FROM all_tables a
    WHERE a.table_name = Decode(Upper('&&Table_Name'),'ALL',a.table_name,Upper('&&Table_Name'))
    AND a.owner = Upper('&&Table_Owner')
    AND a.partitioned = 'NO'
    AND a.logging = 'YES'
    ORDER BY table_name;
  op1 NUMBER;  -- total blocks
  op2 NUMBER;  -- total bytes
  op3 NUMBER;  -- unused blocks
  op4 NUMBER;  -- unused bytes
  op5 NUMBER;  -- last used extent file id
  op6 NUMBER;  -- last used extent block id
  op7 NUMBER;  -- last used block
BEGIN
  Dbms_Output.Disable;
  Dbms_Output.Enable(1000000);
  Dbms_Output.Put_Line('TABLE                          UNUSED BLOCKS    TOTAL BLOCKS HIGH WATER MARK');
  Dbms_Output.Put_Line('------------------------------ --------------- --------------- ---------------');
  FOR cur_rec IN cu_tables LOOP
    Dbms_Space.Unused_Space(cur_rec.owner, cur_rec.table_name, 'TABLE', op1, op2, op3, op4, op5, op6, op7);
    Dbms_Output.Put_Line(RPad(cur_rec.table_name,30,' ') ||
                         LPad(op3,15,' ') ||
                         LPad(op1,15,' ') ||
                         LPad(Trunc(op1-op3-1),15,' '));
  END LOOP;
END;
/




REAL TIME TESTING SAMPLE

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

SQL> DELETE FROM emp WHERE eno<=25000;

25000 rows deleted.


Run the script now.


SQL> truncate table emp;

Table truncated.


Run the script again; the truncate will have reset the high water mark.




THE HIGH WATER MARK CAN BE RESET IN THE FOLLOWING WAYS:


Export/import the table

Alter table ... move (to another tablespace), then rebuild its indexes

Truncate the table and re-insert the data

Alter table ... shrink space (with row movement enabled)








Friday, March 25, 2022

MODIFY DB_BLOCK_SIZE in Oracle and create Tablespace with non standard block size.




The basic unit of disk I/O performed by Oracle is an Oracle block. 

Each Oracle block is a combination of several OS blocks, and the number of OS blocks per Oracle block is OS dependent.


The default Oracle block size is specified by the parameter DB_BLOCK_SIZE, which cannot be modified during the lifetime of the database.


Critical tablespaces such as SYSTEM and SYSAUX are created with the block size specified by the parameter DB_BLOCK_SIZE.

On 64-bit platforms the parameter can be set to 2K, 4K, 8K, 16K, or 32K; 

on 32-bit servers the maximum value for this parameter is 16K.


OLTP systems typically benefit from 4K or 8K blocks, though the choice should be made after studying the data model of the database. 

DSS databases typically benefit from larger block sizes of 16K or 32K.


SQL> show parameter db_block_size

NAME                     TYPE    VALUE

------------------------------------ ----------- ------------------------------

db_block_size                integer     8192


This parameter cannot be changed after creation of the database.


You will get errors like the ones below:

SQL>

SQL> alter system set db_block_size=16384 scope=spfile;

alter system set db_block_size=16384 scope=spfile

*

ERROR at line 1:

ORA-32017: failure in updating SPFILE

ORA-32016: parameter db_block_size cannot be updated in SPFILE

 

SQL> alter system set db_block_size=16384 scope=memory;

alter system set db_block_size=16384 scope=memory

                 *

ERROR at line 1:

ORA-02095: specified initialization parameter cannot be modified

 

SQL>


If there is a need to create a table with a non-default block size, it is possible to do so in the same database. 

Oracle Database allows up to four non-standard block sizes, so tablespaces can use any of those sizes in addition to the standard one.


Before creating a tablespace with non standard block size be sure to allocate new buffer cache space that can accommodate blocks with this size by setting DB_nK_CACHE_SIZE.


For example if the default block size is 8K and you want to create a tablespace with 16K blocks then you need to allocate buffer cache to store 16K blocks using the parameter db_16K_cache_size.

The steps below show the error raised when you create a tablespace with a non-default block size without first allocating a buffer cache for that block size.


SQL> create tablespace tbs_16k datafile '/u01/16k_TBS/16k_tbs1_datafile01.dbf' size 1g blocksize 16k;

create tablespace tbs_16k datafile '/u01/16k_TBS/16k_tbs1_datafile01.dbf' size 1g blocksize 16k

*

ERROR at line 1:

ORA-29339: tablespace block size 16384 does not match configured block sizes


Let's set the 16K buffer cache to 50M and try to create the tablespace again:


SQL> alter system set db_16k_cache_size=50M scope=both;

System altered.


SQL> create tablespace tbs_16k datafile '/u01/16k_TBS/16k_tbs1_datafile01.dbf' size 1g blocksize 16k;

Tablespace created.


The tablespace has been successfully created.


If you want to change the block size of a certain table, create a new tablespace with the required block size and move the table to the newly created tablespace.
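Putting the pieces together, the full sequence is: set the matching DB_nK_CACHE_SIZE, create the tablespace, then move the table. This sketch only prints the SQL; the table name, tablespace name, cache size, and datafile path are placeholders, not values from this post:

```shell
# Sketch: emit the three statements for using a non-standard block size.
# After an ALTER TABLE ... MOVE the table's indexes become unusable and
# must be rebuilt.
nonstd_block_sql() {
  local k=$1 tbs=$2 file=$3 table=$4
  echo "alter system set db_${k}k_cache_size=50M scope=both;"
  echo "create tablespace ${tbs} datafile '${file}' size 1g blocksize ${k}k;"
  echo "alter table ${table} move tablespace ${tbs};"
}

nonstd_block_sql 16 tbs_16k /u01/16k_TBS/16k_tbs1_datafile01.dbf scott.emp
```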