Sunday, 31 May 2015

Data Warehousing Concepts


According to Ralph Kimball:

A data warehouse is a specially designed RDBMS. The data stored in this database should be useful for querying and analysing the business rather than for transaction processing.

According to W. H. Inmon:

A data warehouse is a specially designed RDBMS. The data stored in this database should exhibit four characteristic features:
1. Subject Oriented: Data warehouses are organised around subjects, so that the business can be analysed by top-level management, middle-level management, or individual departments in an enterprise.
In an OLTP system, by contrast, subject-oriented attributes are stored in different subject areas (for example, the sales rep ID is stored in the sales schema and the product in the product schema).
2. Integrated: It contains business information collected from various operational data sources.
When an attribute that is common to several source systems is held in different formats, it has to be loaded into the DWH in a single standardised format; this is called integration (see the sketch after this list).
3. Time Variant: A data warehouse is a time-variant database, which allows you to analyse and compare the business with respect to various time periods (year, quarter, month, week, day).
4. Non-Volatile: A data warehouse is a non-volatile database, meaning that once data has entered the DWH it cannot be changed.
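To make the integration point concrete, here is a small sketch (the gender codes and source-system names are made up for illustration, not taken from any particular system) of how an attribute arriving in different formats is standardised into one warehouse format:

# Minimal sketch: standardising a "gender" attribute that arrives in
# different formats from different (hypothetical) source systems.

SOURCE_FORMATS = {
    "crm":     {"M": "MALE", "F": "FEMALE"},
    "billing": {"1": "MALE", "2": "FEMALE"},
    "hr":      {"male": "MALE", "female": "FEMALE"},
}

def standardise_gender(source_system, raw_value):
    """Map a source-specific code to the single warehouse format."""
    mapping = SOURCE_FORMATS[source_system]
    return mapping.get(str(raw_value).strip(), "UNKNOWN")

if __name__ == "__main__":
    rows = [("crm", "M"), ("billing", "2"), ("hr", "female")]
    for system, value in rows:
        print(system, value, "->", standardise_gender(system, value))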

Dimension Table: A dimension table contains the textual/descriptive attributes of the business process (it allows categories to be browsed quickly and easily).

Fact Tables: A fact table typically includes two types of columns: fact columns and foreign keys to the dimensions. It consists of the measurements, metrics or facts of a business process.
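As a small illustrative sketch (hypothetical column names, plain Python instead of SQL), here is a tiny star schema: a dimension carrying descriptive attributes and a fact carrying foreign keys plus measures:

# Minimal star-schema sketch (hypothetical column names): the dimension
# holds descriptive attributes, the fact holds foreign keys plus measures.

product_dim = {
    101: {"product_name": "Toothpaste", "category": "Personal Care"},
    102: {"product_name": "Toothbrush", "category": "Personal Care"},
}

sales_fact = [
    {"date_key": 20150531, "product_key": 101, "quantity": 3, "amount": 9.00},
    {"date_key": 20150531, "product_key": 102, "quantity": 1, "amount": 2.50},
]

# Browsing by a dimension attribute: total amount per category.
totals = {}
for row in sales_fact:
    category = product_dim[row["product_key"]]["category"]
    totals[category] = totals.get(category, 0.0) + row["amount"]
print(totals)   # {'Personal Care': 11.5}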

Slowly Changing Dimensions:

Attributes of a dimension may undergo changes over time. Whether the history of changes to a particular attribute should be preserved in the data warehouse depends on the business requirement. Such an attribute is called a Slowly Changing Attribute, and a dimension containing such an attribute is called a Slowly Changing Dimension.
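As a hedged illustration, the sketch below shows one common way of preserving such history (a Type 2 change with effective dates and a current-row flag); the column names are assumptions for the example only:

from datetime import date

# One existing customer dimension row (hypothetical columns).
customer_dim = [{
    "customer_sk": 1, "customer_id": "C001", "city": "Pune",
    "effective_from": date(2014, 1, 1), "effective_to": None, "current": True,
}]

def apply_scd2(dim, customer_id, changed_city, change_date, next_sk):
    """Expire the current row and insert a new version (Type 2 change)."""
    for row in dim:
        if row["customer_id"] == customer_id and row["current"]:
            if row["city"] == changed_city:
                return next_sk          # nothing changed, nothing to do
            row["effective_to"] = change_date
            row["current"] = False
    dim.append({
        "customer_sk": next_sk, "customer_id": customer_id, "city": changed_city,
        "effective_from": change_date, "effective_to": None, "current": True,
    })
    return next_sk + 1

next_sk = apply_scd2(customer_dim, "C001", "Mumbai", date(2015, 5, 31), next_sk=2)
for row in customer_dim:
    print(row)   # the old Pune row is expired, a new Mumbai row is current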

Rapidly Changing Dimensions:
A dimension attribute that changes frequently is a Rapidly Changing Attribute. If you don’t need to track the changes, the Rapidly Changing Attribute is no problem, but if you do need to track the changes, using a standard Slowly Changing Dimension technique can result in a huge inflation of the size of the dimension. One solution is to move the attribute to its own dimension, with a separate foreign key in the fact table. This new dimension is called a Rapidly Changing Dimension.

Junk Dimensions:
A junk dimension is a single table with a combination of different and unrelated attributes to avoid having a large number of foreign keys in the fact table. Junk dimensions are often created to manage the foreign keys created by Rapidly Changing Dimensions.

Inferred Dimensions:
While loading fact records, a dimension record may not yet be available. One solution is to generate a surrogate key with Null for all the other attributes. This should technically be called an inferred member, but it is often called an inferred dimension.
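A rough sketch of the inferred-member idea (hypothetical keys and columns): when a fact arrives with a business key that is not yet in the dimension, a surrogate key is generated and the other attributes are left empty until the dimension load catches up:

# Inferred member sketch (hypothetical names): create a placeholder
# dimension row when the fact arrives before the dimension data.

product_dim = {"P100": {"product_sk": 1, "product_name": "Toothpaste"}}
next_sk = 2

def lookup_or_infer(business_key):
    """Return the surrogate key, inserting an inferred member if needed."""
    global next_sk
    if business_key not in product_dim:
        product_dim[business_key] = {
            "product_sk": next_sk,
            "product_name": None,      # unknown until the dimension load catches up
            "inferred": True,
        }
        next_sk += 1
    return product_dim[business_key]["product_sk"]

fact_row = {"product_bk": "P999", "quantity": 5}
fact_row["product_sk"] = lookup_or_infer(fact_row.pop("product_bk"))
print(fact_row, product_dim["P999"])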

Conformed Dimensions:
A Dimension that is used in multiple locations is called a conformed dimension. A conformed dimension may be used with multiple fact tables in a single database, or across multiple data marts or data warehouses.

Degenerate Dimensions:
A degenerate dimension is one whose attribute is stored as part of the fact table rather than in a separate dimension table. These are essentially dimension keys for which there are no other attributes. In a data warehouse, they are often used as the result of a drill-through query to analyze the source of an aggregated number in a report. You can use these values to trace back to transactions in the OLTP system.

Role Playing Dimensions:
A role-playing dimension is one where the same dimension key — along with its associated attributes — can be joined to more than one foreign key in the fact table. For example, a fact table may include foreign keys for both Ship Date and Delivery Date. But the same date dimension attributes apply to each foreign key, so you can join the same dimension table to both foreign keys. Here the date dimension is taking multiple roles to map ship date as well as delivery date, and hence the name of Role Playing dimension.
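A minimal sketch of the same idea (hypothetical keys and dates), with one date dimension looked up twice, once in each role:

# Role-playing sketch: one date dimension, two roles in the fact row.
date_dim = {
    20150529: {"full_date": "2015-05-29", "day_name": "Friday"},
    20150531: {"full_date": "2015-05-31", "day_name": "Sunday"},
}

order_fact = {"order_id": 9001, "ship_date_key": 20150529, "delivery_date_key": 20150531}

ship_date = date_dim[order_fact["ship_date_key"]]          # date dim in the "ship date" role
delivery_date = date_dim[order_fact["delivery_date_key"]]  # same dim in the "delivery date" role
print("Shipped:", ship_date["day_name"], "Delivered:", delivery_date["day_name"])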

Shrunken Dimensions:
A shrunken dimension is a subset of another dimension. For example, the Orders fact table may include a foreign key for Product, but the Target fact table may include a foreign key only for ProductCategory, which is in the Product table, but much less granular. Creating a smaller dimension table, with ProductCategory as its primary key, is one way of dealing with this situation of heterogeneous grain. If the Product dimension is snowflaked, there is probably already a separate table for ProductCategory, which can serve as the Shrunken Dimension.

Static Dimensions:
Static dimensions are not extracted from the original data source, but are created within the context of the data warehouse. A static dimension can be loaded manually — for example with Status codes — or it can be generated by a procedure, such as a Date or Time dimension.

Types of Facts -

Additive:
Additive facts are facts that can be summed up through all of the dimensions in the fact table. A sales fact is a good example of an additive fact.
Semi-Additive:
Semi-additive facts are facts that can be summed up for some of the dimensions in the fact table, but not the others.
Eg: Daily balances fact can be summed up through the customers dimension but not through the time dimension.
Non-Additive:
Non-additive facts are facts that cannot be summed up for any of the dimensions present in the fact table.
Eg: Facts that contain calculated percentages or ratios.
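To make the distinction concrete, the sketch below (made-up account balances) sums a daily-balance fact across customers, which is meaningful, and shows why summing the same fact across time is not:

# Semi-additive sketch: daily account balances (hypothetical data).
balances = [
    {"date": "2015-05-30", "customer": "A", "balance": 100},
    {"date": "2015-05-30", "customer": "B", "balance": 200},
    {"date": "2015-05-31", "customer": "A", "balance": 120},
    {"date": "2015-05-31", "customer": "B", "balance": 180},
]

# Summing across customers for one day is meaningful (total held that day).
total_per_day = {}
for row in balances:
    total_per_day[row["date"]] = total_per_day.get(row["date"], 0) + row["balance"]
print(total_per_day)            # {'2015-05-30': 300, '2015-05-31': 300}

# Summing the same balances across time is NOT meaningful:
print(sum(r["balance"] for r in balances))   # 600 -- not a real amount of money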

Factless Fact Table:
 In the real world, it is possible to have a fact table that contains no measures or facts. These tables are called “Factless Fact tables”.
Eg: A fact table that has only a product key and a date key is a factless fact. There are no measures in this table, but you can still get the number of products sold over a period of time.
Based on the above classifications, fact tables are categorized into two:
Cumulative:
This type of fact table describes what has happened over a period of time. For example, it may describe the total sales by product by store by day. The facts in this type of fact table are mostly additive. The daily sales fact described above is an example of a cumulative fact table.
Snapshot:
This type of fact table describes the state of things at a particular instant in time, and usually includes more semi-additive and non-additive facts. The daily balances fact described above is an example of a snapshot fact table.


DATA MINING

Data mining is the process of finding patterns in large data sets and analyzing data from different perspectives. It allows business users to analyze data from different angles and summarize the relationships identified. Data mining can be useful for increasing revenue and cutting costs.

Example:

In a supermarket, customers who buy a toothbrush on Sundays often also buy toothpaste. This information can be used to increase revenue by offering a combined promotion on toothbrushes and toothpaste, thereby selling more of both products on Sundays.

Data mining process:

Data mining analyzes relationships and patterns in the stored data based on user queries. Data mining involves tasks such as the following:
  • Association: finding relationships between variables. For example, in a retail store we can determine which products are frequently bought together and use this information to market those products (see the sketch after this list).
  • Clustering: identifying logical relationships among data items and grouping them. For example, in a retail store, toothpaste and toothbrushes can be logically grouped.
  • Classification: applying a known pattern to new data.
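As a rough illustration of the association task (the baskets below are made up), counting how often pairs of items appear together highlights combinations such as toothbrush and toothpaste that are candidates for a joint offer:

from itertools import combinations
from collections import Counter

# Hypothetical Sunday baskets from a supermarket.
baskets = [
    {"toothbrush", "toothpaste", "milk"},
    {"toothbrush", "toothpaste"},
    {"bread", "milk"},
    {"toothbrush", "toothpaste", "bread"},
]

# Count how often each pair of items is bought together.
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

for pair, count in pair_counts.most_common(3):
    print(pair, count)   # ('toothbrush', 'toothpaste') appears most often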

Friday, 29 May 2015

Datastage Parallel Processing & Partition Techniques :

The simultaneous use of more than one CPU or processor core to execute a program or multiple computational threads is called parallel processing, or parallelism. Ideally, parallel processing makes programs run faster because there are more engines (CPUs or cores) running them. As you all know, DataStage supports two types of parallelism:

1. Pipeline parallelism
2. Partition parallelism


Pipeline Parallelism :

As and when a row (or a set of rows) is processed at a particular stage, that record is sent on to the next stage for further processing or storage, as in the example below.


Suppose the source contains a large set of rows and 1,000 rows are read in a single segment. As soon as those rows have been processed by the TRANSFORM stage they are sent on to ENRICH, and from there to LOAD. In this way the processors are kept busy and disk usage for staging is reduced.
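A rough Python analogy of that TRANSFORM → ENRICH → LOAD flow, where each stage hands rows downstream as soon as they are processed instead of staging the whole data set (the stage logic here is purely illustrative):

# Pipeline analogy: each stage yields rows downstream as soon as
# they are ready, instead of staging the full data set to disk.

def extract(n_rows):
    for i in range(n_rows):
        yield {"id": i, "amount": i * 1.5}

def transform(rows):
    for row in rows:
        row["amount"] = round(row["amount"] * 1.1, 2)   # apply a business rule
        yield row

def enrich(rows):
    for row in rows:
        row["currency"] = "USD"                          # add reference data
        yield row

def load(rows):
    for row in rows:
        print("loading", row)                            # stand-in for the target write

load(enrich(transform(extract(5))))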


Partition Parallelism :

Partition parallelism depends on dividing large data sets into smaller subsets (partitions) spread across resources. The goal is to distribute the data evenly; some transforms require all records within the same group to be in the same partition, and the same transform runs on every partition.

Using partition parallelism, the same job is effectively run simultaneously by several processors, each handling a separate subset of the total data. Each partition is independent of the others; there is no concept of a “global” state.



DataStage combines partition and pipeline parallelism to implement ETL solutions.





Partitioning techniques fall into key-based and keyless categories (a small sketch of the two behaviours follows this list).

Key-based techniques:

a) Hash

b) Modulus

c) Range

d) DB2

Keyless techniques:

a) Same

b) Entire

c) Round Robin
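A small sketch contrasting a key-based technique (hash or modulus on a key column, which keeps rows with the same key in the same partition) with the keyless round-robin technique (which simply deals rows out in turn); the rows and partition count are made up:

# Key-based vs keyless partitioning sketch (hypothetical rows, 3 partitions).
N_PARTITIONS = 3
rows = [{"cust_id": cid, "amount": amt} for cid, amt in
        [(101, 10), (102, 20), (101, 30), (103, 40), (102, 50), (104, 60)]]

def hash_partition(row):
    # Key-based: every row with the same key lands in the same partition.
    return hash(row["cust_id"]) % N_PARTITIONS

def modulus_partition(row):
    # Key-based, for integer keys: key mod number-of-partitions.
    return row["cust_id"] % N_PARTITIONS

def round_robin_partition(index):
    # Keyless: rows are dealt out in turn, giving an even spread.
    return index % N_PARTITIONS

for i, row in enumerate(rows):
    print(row["cust_id"],
          "hash ->", hash_partition(row),
          "modulus ->", modulus_partition(row),
          "round robin ->", round_robin_partition(i))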

Performance Tuning

1. Turn off Runtime Column Propagation wherever it is not required.
2. Make use of Modify, Filter, Aggregator, Column Generator and similar stages instead of the Transformer stage only if the anticipated volumes are high and performance becomes a problem; otherwise use the Transformer. It is much easier to code a Transformer than a Modify stage.
3. Avoid propagating unnecessary metadata between stages. Use the Modify stage to drop the metadata; Modify drops metadata only when this is explicitly specified using the DROP clause.
4. One of the most common mistakes developers make is not doing a volumetric analysis before deciding whether to use the Join, Lookup or Merge stage. Estimate the volumes and then decide which stage to use.
5. Add reject files wherever you need to reprocess rejected records or you think considerable data loss may happen. Try to keep reject links at least on Sequential File stages and on stages that write to databases.
6. Make use of an ORDER BY clause when a database stage is used as an input to a join; the intention is to use the database's power for sorting instead of DataStage resources. Keep the join partitioning as Auto, and indicate the "don't sort, previously sorted" option between the database stage and the Join stage using a Sort stage when the ORDER BY clause is used.
7. When doing outer joins, you can use dummy variables for null checking instead of fetching an explicit column from the table.
8. Use Sort stages instead of Remove Duplicates stages; the Sort stage has more grouping options and sort-indicator options.
9. One of the most frequent problems developers face is lookup failures caused by the string pad character that DataStage appends when converting strings of lower precision to higher precision. Try to decide on the APT_STRING_PADCHAR and APT_CONFIG_FILE parameters from the beginning. Ideally APT_STRING_PADCHAR should be set to 0x00 (the C/C++ end-of-string character) and the configuration file to the maximum number of nodes available.
10. Data partitioning is a very important part of parallel job design. It is usually advisable to leave the partitioning as Auto unless you are comfortable with partitioning, since all DataStage stages are designed to behave correctly with Auto partitioning.
11. Do remember that Modify drops metadata only when it is explicitly asked to do so using the KEEP/DROP clauses.

Sunday, 3 November 2013

Datastage Errors and Resolution


You may get many errors in datastage while compiling the jobs or running the jobs. 

Some of the errors are as follows 

a) Source file not found.
This occurs when you try to read a file that does not exist with the specified name.

b) Sometimes you may get fatal errors.

c) Data type mismatches.
This occurs when there is a data type mismatch in the job.

d) Field size errors.

e) Metadata mismatch.

f) Data type sizes differ between source and target.

g) Column mismatch.

h) Process time out.
This error can occur when the server is busy.

Some of the errors in detail:

ds_Trailer_Rec: When checking operator: When binding output schema variable "outRec": When binding output interface field "TrailerDetailRecCount" to field "TrailerDetailRecCount": Implicit conversion from source type "ustring" to result type "string[max=255]": Possible truncation of variable length ustring when converting to string using codepage ISO-8859-1.

Solution: I resolved this by changing the Extended attribute of the column (under the transformer's metadata) to Unicode.

When checking operator: A sequential operator cannot preserve the partitioning
 of the parallel data set on input port 0.

Solution: I resolved this by changing Preserve Partitioning to 'Clear' under the transformer properties.


Syntax error: Error in "group"  operator: Error in output redirection: Error in output parameters: Error in modify adapter: Error in binding: Could not find type: "subrec", line 35

Solution: This is an issue with the level number of the columns being added in the transformer. Their level number was blank, while the columns taken from the CFF file had it as 02. Adding the level number made the job work.

Out_Trailer: When checking operator: When binding output schema variable "outRec": When binding output interface field "STDCA_TRLR_REC_CNT" to field "STDCA_TRLR_REC_CNT": Implicit conversion from source type "dfloat" to result type "decimal[10,0]": Possible range/precision limitation.



CE_Trailer: When checking operator: When binding output interface field "Data" to field "Data": Implicit conversion from source type "string" to result type "string[max=500]": Possible truncation of variable length string.



Implicit conversion from source type "dfloat" to result type "decimal[10,0]": Possible range/precision limitation.


Solution: Use the transformer function DFloatToDecimal, since the target field is Decimal. By default the output from the Aggregator stage is double, which causes the warning above; applying this function resolves it.




When binding output schema variable "outputData": When binding output interface field "RecordCount" to field "RecordCount": Implicit conversion from source type "string[max=255]" to result type "int16": Converting string to number.





Problem(Abstract)
Jobs that process a large amount of data in a column can abort with this error:
the record is too big to fit in a block; the length requested is: xxxx, the max block length is: xxxx.
Resolving the problem
To fix this error you need to increase the block size to accommodate the record size:
1.             Log into Designer and open the job.
2.             Open the job properties--> parameters-->add environment variable and select: APT_DEFAULT_TRANSPORT_BLOCK_SIZE
3.             You can set this up to 256MB but you really shouldn't need to go over 1MB.
NOTE: value is in KB

For example to set the value to 1MB:
APT_DEFAULT_TRANSPORT_BLOCK_SIZE=1048576

The default for this value is 128 KB.

When setting APT_DEFAULT_TRANSPORT_BLOCK_SIZE you want to use the smallest possible value since this value will be used for all links in the job.

For example, if your job fails with APT_DEFAULT_TRANSPORT_BLOCK_SIZE set to 1 MB and succeeds at 4 MB, you would want to do further testing to find the smallest value between 1 MB and 4 MB that allows the job to run, and use that value. Using 4 MB could cause the job to use more memory than needed, since all the links would use a 4 MB transport block size.

NOTE: If this error appears for a dataset use APT_PHYSICAL_DATASET_BLOCK_SIZE.



1.      While connecting through Remote Desktop: “Terminal server has exceeded the maximum number of allowed connections”

SOL:   In a command prompt, type: mstsc /v:<server IP> /admin

           OR                      mstsc /v:<server IP> /console



2.    SQL20521N. Error occurred processing a conditional compilation directive near string. Reason code=rc.
      Following link has issue description:

http://pic.dhe.ibm.com/infocenter/db2luw/v9r7/index.jsp?topic=%2Fcom.ibm.db2.luw.messages.sql.doc%2Fdoc%2Fmsql20521n.html



3.      SK_RETAILER_GROUP_BRDIGE,1: runLocally() did not reach EOF on its input data set 0.

SOL:   The warning disappears after regenerating the SK file.



4.      While connecting to the DataStage client there is no response, and while restarting WebSphere services the following errors occurred:

[root@poluloro01 bin]# ./stopServer.sh  server1 -user wasadmin -password Wasadmin0708

ADMU0116I: Tool information is being logged in file

          /opt/ibm/WebSphere/AppServer/profiles/default/logs/server1/stopServer.log

ADMU0128I: Starting tool with the default profile

ADMU3100I: Reading configuration for server: server1

ADMU0111E: Program exiting with error: javax.management.JMRuntimeException:

           ADMN0022E: Access is denied for the stop operation on Server MBean

           because of insufficient or empty credentials.

ADMU4113E: Verify that username and password information is on the command line

           (-username and -password) or in the <conntype>.client.props file.

ADMU1211I: To obtain a full trace of the failure, use the -trace option.

ADMU0211I: Error details may be seen in the file:

          /opt/ibm/WebSphere/AppServer/profiles/default/logs/server1/stopServer.log

 

SOL:    The wasadmin and xmeta passwords need to be reset; the commands are below:

            [root@poluloro01 bin]# cd /opt/ibm/InformationServer/ASBServer/bin/

[root@poluloro01 bin]# ./AppServerAdmin.sh -was -user wasadmin

-password Wasadmin0708

Info WAS instance /Node:poluloro01/Server:server1/ updated with new user information

Info MetadataServer daemon script updated with new user information

[root@poluloro01 bin]# ./AppServerAdmin.sh -was -user xmeta -password Xmeta0708

Info WAS instance /Node:poluloro01/Server:server1/ updated with new user information

Info MetadataServer daemon script updated with new user information



5.      “The specified field doesn’t exist in view adapted schema”



SOL:   Most of the time "The specified field: XXXXXX does not exist in the view adapted schema" occurs because a field has been missed in the mapping. Every stage used in the middle of a job has an Output tab; make sure you have mapped every single field required by the next stage.

Sometimes this error can occur even after mapping the fields, and one of the reasons could be that the view adapter has not linked the input and output fields. In that case the required field mapping should be dropped and recreated.

Just to give some insight into this, the view adapter is an operator that is responsible for mapping the input and output fields. DataStage creates an instance of APT_ViewAdapter, which translates the components of the operator's input interface schema into matching components of the interface schema. So if the interface schema does not have the same columns as the operator's input interface schema, this error is reported.



1) When we use Same partitioning in the DataStage Transformer stage, we get the following warning in version 7.5.2:

TFCP000043      2       3       input_tfm: Input dataset 0 has a partitioning method other than entire specified; disabling memory sharing.

This is a known issue and you can safely demote the warning to informational by adding it to the project-specific message handler.

2) Warning: A sequential operator cannot preserve the partitioning of input data set on input port 0

Resolution: Clear the preserve partition flag before Sequential file stages.

3)DataStage parallel job fails with fork() failed, Resource temporarily unavailable

On AIX, execute the following command to check the maxuproc setting, and increase it if you plan to run multiple jobs at the same time.

lsattr -E -l sys0 | grep maxuproc
maxuproc        1024               Maximum number of PROCESSES allowed per user      True

4)TFIP000000              3       Agg_stg: When checking operator: When binding input interface field “CUST_ACT_NBR” to field “CUST_ACT_NBR”: Implicit conversion from source type “string[5]” to result type “dfloat”: Converting string to number.

Resolution: Use the Modify stage to explicitly convert the data type before sending the data to the Aggregator stage.

5)Warning: A user defined sort operator does not satisfy the requirements.

Resolution: Check the order of the sort columns and make sure the same order is used in the Join stage that follows the sort when joining the two inputs.

6)TFTM000000      2       3      Stg_tfm_header,1: Conversion error calling conversion routine timestamp_from_string data may have been lost

TFTM000000              1       xfmJournals,1: Conversion error calling conversion routine decimal_from_string data may have been lost

Resolution: Check for the correct date or decimal format, and also for null values in the date or decimal fields, before passing them to the DataStage StringToDate, DateToString, DecimalToString or StringToDecimal functions.

7)TOSO000119      2       3      Join_sort: When checking operator: Data claims to already be sorted on the specified keys the ‘sorted’ option can be used to confirm this. Data will be resorted as necessary. Performance may improve if this sort is removed from the flow

Resolution: Sort the data before sending it to the Join stage, and check the order of the sort keys and join keys to make sure both are in the same order.

8)TFOR000000      2       1       Join_Outer: When checking operator: Dropping component “CUST_NBR” because of a prior component with the same name.

Resolution: If you are using the Join, Difference, Merge or Compare stages, make sure the two input links have different column names for everything other than the key columns.

9)TFIP000022              1       oci_oracle_source: When checking operator: When binding output interface field “MEMBER_NAME” to field “MEMBER_NAME”: Converting a nullable source to a non-nullable result;

Resolution: If you are reading from an Oracle database, or in any processing stage where the incoming column is defined as nullable, and you define the metadata in DataStage as non-nullable, you will get the above issue. If you want to convert a nullable field to non-nullable, make sure you apply the available null-handling functions in DataStage or in the extract query.



DATASTAGE COMMON ERRORS/WARNINGS AND SOLUTIONS – 2

1. No jobs or logs showing in IBM DataStage Director Client, however jobs are still accessible from the Designer Client.

SOL:   The SyncProject command that is installed with DataStage 8.5 can be run to analyze and recover projects:

SyncProject -ISFile islogin -project dstage3 dstage5 -Fix

2.  CASHOUT_DTL: Invalid property value /Connection/Database (CC_StringProperty::getValue, file CC_StringProperty.cpp, line 104)

SOL: Change the Data Connection properties manually in the produced DB2 Connector stage.

A patch fix (JR35643) is available for this issue.

3. Import .dsx file from command line

SOL: DSXImportService -ISFile dataconnection -DSProject dstage -DSXFile c:\export\oldproject.dsx

4. Generate Surrogate Key without Surrogate Key Stage

SOL:   @PARTITIONNUM + (@NUMPARTITIONS * (@INROWNUM - 1)) + 1

Use above Formula in Transformer stage to generate a surrogate key.
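A small simulation of how that expression produces unique, non-overlapping keys: @PARTITIONNUM is the 0-based partition number, @NUMPARTITIONS the partition count, and @INROWNUM the 1-based row number within a partition (the row counts below are made up):

# Simulate the Transformer expression
#   @PARTITIONNUM + (@NUMPARTITIONS * (@INROWNUM - 1)) + 1
# for a 3-partition run with a few rows per partition.

NUM_PARTITIONS = 3
rows_per_partition = {0: 3, 1: 3, 2: 2}   # hypothetical row counts

for partition_num in range(NUM_PARTITIONS):
    for in_row_num in range(1, rows_per_partition[partition_num] + 1):
        key = partition_num + (NUM_PARTITIONS * (in_row_num - 1)) + 1
        print(f"partition {partition_num}, row {in_row_num} -> key {key}")
# Partition 0 produces 1, 4, 7; partition 1 produces 2, 5, 8; partition 2 produces 3, 6.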

5. Failed to authenticate the current user against the selected Domain: Could not connect to server.

RC: Client has invalid entry in host file

Server listening port might be blocked by a firewall

Server is down

SOL:   Update the host file on client system so that the server hostname can be resolved from client.

Make sure the WebSphere TCP/IP ports are opened by the firewall.

Make sure the WebSphere application server is running. (OR)

Restart Websphere services.

6. The connection was refused or the RPC daemon is not running (81016)

RC: The dsrpcd process must be running in order to be able to log in to DataStage.

If you restart DataStage, but the socket used by the dsrpcd (default is 31538) was busy, the dsrpcd will fail to start. The socket may be held by dsapi_slave processes that were still running or recently killed when DataStage was restarted.

SOL: Run “ps -ef | grep dsrpcd” to confirm the dsrpcd process is not running.

Run “ps -ef | grep dsapi_slave” to check if any dsapi_slave processes exist. If so, kill them.

Run “netstat -a | grep dsrpc” to see if any processes have sockets that are ESTABLISHED, FIN_WAIT, or CLOSE_WAIT. These will prevent the dsrpcd from starting. The sockets with status FIN_WAIT or CLOSE_WAIT will eventually time out and disappear, allowing you to restart DataStage.

Then restart DSEngine. (If the above doesn’t work, reboot the system.)

7. To save Datastage logs in notepad or readable format

SOL:   a) /opt/ibm/InformationServer/server/DSEngine  (go to this directory)

./bin/dsjob  -logdetail project_name job_name >/home/dsadm/log.txt

b) In the Director client: Project tab → Print → select the 'Print to file' option and save it in a local directory.

8. “Run time error ’457′. This Key is already associated with an element of this collection.”

SOL:   Needs to rebuild repository objects.

a)     Login to the Administrator client

b)     Select the project

c)      Click on Command

d)     Issue the command ds.tools

e)     Select option ‘2’

f)       Keep clicking next until it finishes.

g)     All objects will be updated.

9. To stop DataStage jobs at the Linux level

SOL:   ps -ef | grep dsadm

(to check the process ID and identify phantom jobs)

kill -9 <process_id>

10. To run datastage jobs from command line

SOL:   cd  /opt/ibm/InformationServer/server/DSEngine

./dsjob  -server $server_nm   -user  $user_nm   -password   $pwd  -run $project_nm $job_nm

11. Failed to connect to JobMonApp on port 13401.

SOL:   needs to restart jobmoninit script (in /opt/ibm/InformationServer/Server/PXEngine/Java)

Type    sh  jobmoninit  start $APT_ORCHHOME

Add a '127.0.0.1 localhost' entry to the /etc/hosts file

(Without the localhost entry, the job monitor will be unable to use the ports correctly.)

12. SQL0752N. Connect to a database is not permitted within logical unit of work CONNECT type 1 settings is in use.

SOL: Issue a COMMIT or ROLLBACK statement before requesting a connection to another database.

1.     While running ./NodeAgents.sh start command… getting the following error: “LoggingAgent.sh process stopped unexpectedly”

SOL:   Kill the LoggingAgentSocketImpl process:

              ps -ef | grep LoggingAgentSocketImpl   (OR)

              ps -ef | grep Agent   (to check the process ID of the above)

2.     Warning: A sequential operator cannot preserve the partitioning of input data set on input port 0

SOL:    Clear the preserve partition flag before Sequential file stages.

3.     Warning: A user defined sort operator does not satisfy the requirements.

SOL:   Check the order of the sort columns and make sure the same order is used in the Join stage that follows the sort when joining the two inputs.

4.     Conversion error calling conversion routine timestamp_from_string data may have been lost. xfmJournals,1: Conversion error calling conversion routine decimal_from_string data may have been lost

SOL:    Check for the correct date or decimal format, and also for null values in the date or decimal fields, before passing them to the DataStage StringToDate, DateToString, DecimalToString or StringToDecimal functions.

5.     To display all the jobs in command line

SOL:  

cd /opt/ibm/InformationServer/Server/DSEngine/bin

./dsjob -ljobs <project_name>

6.     “Error trying to query dsadm[]. There might be an issue in database server”

SOL:   Check XMETA connectivity.

db2 connect to xmeta (A connection to or activation of database “xmeta” cannot be made because of  BACKUP pending)

7.      “DSR_ADMIN: Unable to find the new project location”

SOL:   Template.ini file might be missing in /opt/ibm/InformationServer/Server.

Copy the file from another server.

8.      “Designer LOCKS UP while trying to open any stage”

SOL:   Double click on the stage that locks up datastage

           Press ALT+SPACE

           Windows menu will popup and select Restore

           It will show your properties window now

           Click on “X” to close this window.

           Now, double click again and try whether properties window appears.

9.      “Error Setting up internal communications (fifo RT_SCTEMP/job_name.fifo)

SOL:   Remove the locks and try to run (OR)

          Restart DSEngine and try to run (OR)

Go to /opt/ibm/InformationServer/server/Projects/proj_name/

            ls RT_SCT* then

            rm -f RT_SCTEMP

            then try to restart it.

10.      While attempting to compile job,  “failed to invoke GenRunTime using Phantom process helper”

RC:     /tmp space might be full

           Job status is incorrect

           Format problems with projects uvodbc.config file

SOL:      a)        clean up /tmp directory

              b)        DS Director → Job → Clear Status File

              c)         confirm uvodbc.config has the following entry/format:

                       [ODBC SOURCES]

                       <local uv>

                       DBMSTYPE = UNIVERSE

                       Network  = TCP/IP

                       Service =  uvserver

                       Host = 127.0.0.1

ERROR: Phantom error in jobs



Resolution: DataStage services have to be restarted.

So follow the following steps.

Log in to the server through PuTTY as the dsadm user.



Check whether any active or stale sessions are present:

ps -ef | grep slave



Ask the application team to close the active or stale sessions running from application’s user.

If they have closed the sessions, but sessions are still there, then kill those sessions.



Make sure no jobs are running

If any, ask the application team to stop the job

ps -ef | grep dsd.run



Check the output of the command below before stopping the DataStage services:

netstat -a | grep dsrpc

If any connections are ESTABLISHED, check that no job, stale, active or osh sessions are still running.

If any connections are in CLOSE_WAIT, wait for some time and those connections will disappear.



Stop the Datastage services.

cd $DSHOME

. ./dsenv

cd $DSHOME/bin

./uv -admin -stop

Check whether the DataStage services are stopped:

       netstat -a | grep dsrpc

      The above command should return no output.



Wait 10 to 15 minutes for the shared memory to be released by the processes holding it.

Start the DataStage services:

 ./uv -admin -start

If the command asks for the dsadm password, enable impersonation through the root user:

              ${DSHOME}/scripts/DSEnable_impersonation.sh


IBM InfoSphere DataStage

IBM InfoSphere DataStage is an ETL tool and part of the IBM Information Platforms Solutions suite and IBM InfoSphere. It uses a graphical notation to construct data integration solutions and is available in various versions such as the Server Edition and the Enterprise Edition.

DataStage originated at VMark, a spin-off from Prime Computer that developed two notable products: the UniVerse database and the DataStage ETL tool. The first VMark ETL prototype was built by Lee Scheffler in the first half of 1996. Peter Weyman was VMark's VP of Strategy and identified the ETL market as an opportunity. He appointed Lee Scheffler as the architect and conceived the product brand name "Stage" to signify modularity and component orientation. This tag was used to name DataStage and was subsequently used in the related products QualityStage, ProfileStage, MetaStage and AuditStage. Lee Scheffler presented the DataStage product overview to the board of VMark in June 1996 and it was approved for development. The product was in alpha testing in October, beta testing in November, and became generally available in January 1997.

VMark merged with UniData in October 1997 and renamed itself Ardent Software. In 1999 Ardent Software was acquired by Informix, the database software vendor. In April 2001 IBM acquired Informix and took just the database business, leaving the data integration tools to be spun off as an independent software company called Ascential Software. In November 2001, Ascential Software Corp. of Westboro, Mass. acquired privately held Torrent Systems Inc. of Cambridge, Mass. for $46 million in cash, and announced that Torrent's parallel processing capabilities would be integrated into the DataStageXE platform. In March 2005 IBM acquired Ascential Software and made DataStage part of the WebSphere family as WebSphere DataStage. In 2006 the product was released as part of the IBM Information Server under the Information Management family, but was still known as WebSphere DataStage. In 2008 the suite was renamed InfoSphere Information Server and the product was renamed InfoSphere DataStage.

•  Enterprise Edition (PX): the name given to the version of DataStage with a parallel processing architecture and parallel ETL jobs.
•  Server Edition: the name of the original version of DataStage, representing Server jobs. Early DataStage versions contained only Server jobs; DataStage 5 added Sequence jobs and DataStage 6 added Parallel jobs via Enterprise Edition.
•  MVS Edition: mainframe jobs, developed on a Windows or Unix/Linux platform and transferred to the mainframe as compiled mainframe jobs.
•  DataStage for PeopleSoft: a Server Edition with prebuilt PeopleSoft EPM jobs under an OEM arrangement with PeopleSoft and Oracle Corporation.
•  DataStage TX: for processing complex transactions and messages, formerly known as Mercator; now known as WebSphere Transformation Extender.
•  ISD (Information Services Director, formerly DataStage RTI): a Real-Time Integration pack that can turn server or parallel jobs into SOA services.

InfoSphere DataStage is a powerful data integration tool. It was acquired by IBM in 2005 and has become part of the IBM Information Server platform. It uses a client/server design in which jobs are created and administered via a Windows client against a central repository on the server. IBM InfoSphere DataStage is capable of integrating data on demand across multiple sources and target applications, handling high volumes of data using a high-performance parallel framework. InfoSphere DataStage also facilitates extended metadata management and enterprise connectivity.


IBM InfoSphere DataStage Features & Benefits


IBM WebSphere DataStage provides these unique capabilities:

The Most Powerful ETL Solution
Supports the collection, integration and transformation of high volumes of data, with data structures ranging from simple to highly complex. DataStage manages data arriving in real time as well as data received daily, weekly or monthly.

The Most Scalable Platform
Enables companies to solve large-scale business problems through high-performance processing of massive data volumes. By leveraging the parallel processing capabilities of multiprocessor hardware platforms, DataStage Enterprise Edition can scale to satisfy the demands of ever-growing data volumes, stringent real-time requirements and ever-shrinking batch windows.

The Most Comprehensive Source and Target Support
Supports a virtually unlimited number of heterogeneous data sources and targets in a single job, including text files; complex data structures in XML; ERP systems such as SAP and PeopleSoft; almost any database (including partitioned databases); web services; and SAS.

Real-time Data Integration Support
Operates in real time, capturing messages from Message Oriented Middleware (MOM) queues using JMS or WebSphere MQ adapters to seamlessly combine data into conforming operational and historical analysis perspectives. IBM WebSphere RTI Services is a service-oriented architecture (SOA) middleware broker that enables the enterprise-wide benefits of the Ascential Enterprise Integration Suite across the continuum of time constraints, application suites, interface protocols and integration technologies.

Advanced Maintenance and Development
Gives developers maximum speed, flexibility and effectiveness in building, deploying, updating and managing their data integration infrastructure. Full data integration reduces the development and maintenance cycle for data integration projects by simplifying administration and maximizing development resources.

Complete Connectivity Between Any Data Source and Any Application
Ensures that the most relevant, complete and accurate data is integrated and used by the most popular enterprise application software brands, including SAP, Siebel, Oracle, PeopleSoft and JD Edwards.