A short clip of my sky dive - hope it encourages others to take the plunge!
Sunday, 18 September 2011
Wednesday, 7 September 2011
DBS CRASHED OR SESSIONS RESET. RECOVERY IN PROGRESS or WSA E HOST UNREACH
This seems to be a fairly common stumbling block for Teradata users. As I understand it, most (if not all) applications connecting to Teradata interface with the TPA, or Trusted Parallel Application. I do not know of any desktop application that doesn't, so here is a summary of the problem and a probable solution.
You are able to ping the Teradata node, but when you try to log in using BTEQ or SQL Assistant, you get an error message. For BTEQ the error message is this:
Warning: DBS CRASHED OR SESSIONS RESET. RECOVERY IN PROGRESS
For SQL Assistant, it's WSA E HOST UNREACH (see the earlier post for a screenshot)
The simple solution might be to start the TPA daemon on the server:
/etc/init.d/tpa start
Here's the full text:
(none):~ # /etc/init.d/tpa stop
PDE stopped for TPA shutdown
(none):~ # netstat -an | grep 1025
(none):~ # bteq
Teradata BTEQ 12.00.00.00 for LINUX.
Copyright 1984-2007, NCR Corporation. ALL RIGHTS RESERVED.
Enter your logon or BTEQ command:
.logon suse/partha
.logon suse/partha
Password:
*** Warning: DBS CRASHED OR SESSIONS RESET. RECOVERY IN PROGRESS>
*** Warning: Exiting because of three BREAKs!
*** Exiting BTEQ...
*** RC (return code) = 2
(none):~ # netstat -an | grep 1025
(none):~ # /etc/init.d/tpa start
Teradata Database Initiator service is starting...
Teradata Database Initiator service started successfully.
(none):~ # netstat -an | grep 1025
(none):~ # bteq
Teradata BTEQ 12.00.00.00 for LINUX.
Copyright 1984-2007, NCR Corporation. ALL RIGHTS RESERVED.
Enter your logon or BTEQ command:
.logon suse/partha
.logon suse/partha
Password:
*** Logon successfully completed.
*** Teradata Database Release is 12.00.03.12
*** Teradata Database Version is 12.00.03.14
*** Transaction Semantics are BTET.
*** Character Set Name is 'ASCII'.
*** Total elapsed time was 9 seconds.
BTEQ -- Enter your DBC/SQL request or BTEQ command:
select current_date;
select current_date;
*** Query completed. One row found. One column returned.
*** Total elapsed time was 2 seconds.
Date
--------
11/09/07
BTEQ -- Enter your DBC/SQL request or BTEQ command:
.quit
.quit
*** You are now logged off from the DBC.
*** Exiting BTEQ...
*** RC (return code) = 0
(none):~ # netstat -an | grep 1025
tcp 0 0 0.0.0.0:1025 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:42528 127.0.0.1:1025 TIME_WAIT
tcp 0 0 127.0.0.1:42527 127.0.0.1:1025 TIME_WAIT
(none):~ # netstat -an | grep 1025
tcp 0 0 0.0.0.0:1025 0.0.0.0:* LISTEN
(none):~ #
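The diagnosis above boils down to one check: if nothing is listening on port 1025, start TPA. Here is a minimal shell sketch of that check; the port number and init-script path are the ones from this transcript, and the netstat output is passed in as a string so the function is easy to try out (on a live node you would feed it the real `netstat -an`):

```shell
#!/bin/sh
# Decide whether the Teradata gateway is up by looking for a
# LISTEN socket on port 1025 in the supplied netstat output.
check_gateway() {
    printf '%s\n' "$1" | grep -q ':1025 .*LISTEN' \
        && echo "listener up" \
        || echo "listener down - run /etc/init.d/tpa start"
}

check_gateway "tcp 0 0 0.0.0.0:1025 0.0.0.0:* LISTEN"
check_gateway ""
```

On a live node the call would simply be `check_gateway "$(netstat -an)"`.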
and here's the screenshot:
Tuesday, 6 September 2011
don't get caught due to a missing cop1
Most Teradata clients expect to find the Teradata server name in a specialized form in the hosts file. This is /etc/hosts on Linux and <OSDRIVE>\Windows\System32\drivers\etc\hosts on Windows. It turns out that some tools like Teradata SQL Assistant can't locate the server even if you put the full IP address in the ODBC connection.
The client shows an error like this:
0: WSA E HostUnreach: The Teradata server can't currently be reached over this network.
The simplest solution is to add an entry in the hosts file (typical locations given above) with the Teradata node name followed by cop1 (or cop2, etc.). Here's how it should look now:
# comment
# lines
# lines
# here
192.168.93.128 suse susecop1
Note that I have added two names on that line. While suse is the actual name of the remote server, I have appended cop1 to the second one so that Teradata tools can figure out this is a Teradata server!
Thanks to Jim Chapman for explaining this in this post.
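A quick grep is enough to tell whether your own hosts file already carries the copN alias the clients look for. A small sketch, with the file contents passed in as a string so you can try it; the suse/susecop1 names are just the example entry above:

```shell
#!/bin/sh
# Check whether a hosts file defines the "<node>cop1" alias
# that Teradata client tools expect.
check_cop_alias() {
    # $1: hosts file contents, $2: Teradata node name
    printf '%s\n' "$1" | grep -q "${2}cop1" \
        && echo "cop1 alias present" \
        || echo "append: <ip> ${2} ${2}cop1"
}

hosts="192.168.93.128 suse susecop1"
check_cop_alias "$hosts" suse
check_cop_alias "$hosts" otherserver
```

On a real machine you would pass `"$(cat /etc/hosts)"` as the first argument.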
If you are not able to logon using BTEQ or Teradata SQL Assistant, there might be a problem with the Trusted Parallel Application (TPA) daemon. More on that in the next entry.
Wednesday, 29 June 2011
Phir bhi dil hai hindustani, or some common mistakes people from the subcontinent make
We Indians tend to have a desi version of most things in life - Indian western culture, Indian western English, Indian western attire and so on. As in other areas, our contribution to the English language is immense, so much so that we often invent new words when we strongly feel that a word should have existed. For the uninitiated, though, this causes confusion. While I stake no claim to expertise in the language, here are a few common mistakes:
Developer (to American/British PM): Few activities are pending
What is implied: A few of the activities are pending
What the PM understands: Almost nothing is pending. 'A few' and 'few' are not the same and never will be. 'Few' means close to nothing, if anything at all. The problem is that there is no equivalent distinction in the Indian languages, so to us 'few' and 'a few' sound the same.
Parallely: No such word exists yet. Till such time, let's try to 'do things in parallel'.
Prepone: The existence of 'postpone' doesn't automatically mean there is a 'prepone'. One way out might be to 'bring things forward'.
Updation: No, 'updation' of a document doesn't mean anything. Say the document/policy update will happen in a month.
I will add to the list as more words come to mind.
Last but not least, our fascination for American words just keeps growing; don't freak out when we pop in a darn freaking comment just to make it sound awesome. Too good, eh?
Sunday, 26 June 2011
The right, left outer join!
col(+) = val
is not quite the same as
col = val
or col is NULL.
In the first case, the RDBMS will pick up all rows whether the column is NULL or NOT NULL, and display the column as NULL when its value does not match val. The number of rows returned is unaffected.
In the second case, however, the RDBMS will pick up only those rows which have the column value equal to val, as well as those rows in which it is NULL (right-table columns that are NULL because the outer join found no match, or values that are simply NULL).
Illustration below:
SQL> create table emp
2 ( emp_id number,
3 emp_name varchar2(30),
4 dept_no number);
Table created.
SQL> create table dept
2 (dept_id number,
3 dept_name varchar2(30)
4 );
Table created.
SQL> insert into emp values(100, 'Adam', 10);
1 row created.
SQL> insert into emp values(200, 'Bud', 20);
1 row created.
SQL> insert into emp values(300, 'Carling', 10);
1 row created.
SQL> insert into emp values(400,'Don', NULL);
1 row created.
SQL> insert into emp values(500, 'Emma',40);
1 row created.
SQL> insert into dept values(10, 'Accounts');
1 row created.
SQL> insert into dept values(20, 'Sales');
1 row created.
SQL> insert into dept values(30, 'IT');
1 row created.
SQL> commit;
Commit complete.
SQL> column emp_name format a8;
SQL> select *
2 from emp,
3 dept
4 where emp.dept_no = dept.dept_id;
EMP_ID EMP_NAME DEPT_NO DEPT_ID DEPT_NAME
---------- -------- ---------- ---------- ------------------------------
100 Adam 10 10 Accounts
200 Bud 20 20 Sales
300 Carling 10 10 Accounts
SQL> select *
2 from emp,
3 dept
4 where emp.dept_no = dept.dept_id(+)
5 order by emp_id;
EMP_ID EMP_NAME DEPT_NO DEPT_ID DEPT_NAME
---------- -------- ---------- ---------- ------------------------------
100 Adam 10 10 Accounts
200 Bud 20 20 Sales
300 Carling 10 10 Accounts
400 Don
500 Emma 40
SQL> select *
2 from emp,
3 dept
4 where emp.dept_no = dept.dept_id(+)
5 and dept.dept_name (+) = 'Sales'
6 order by 1;
EMP_ID EMP_NAME DEPT_NO DEPT_ID DEPT_NAME
---------- -------- ---------- ---------- ------------------------------
100 Adam 10
200 Bud 20 20 Sales
300 Carling 10
400 Don
500 Emma 40
Note that rows which have dept_name = 'Accounts' are also picked up, but the column value is displayed as NULL since it does not match 'Sales'.
SQL> select *
2 from emp,
3 dept
4 where emp.dept_no = dept.dept_id(+)
5 and (dept.dept_name = 'Sales'
6 or
7 dept.dept_name is null)
8 order by emp_id;
EMP_ID EMP_NAME DEPT_NO DEPT_ID DEPT_NAME
---------- -------- ---------- ---------- ------------------------------
200 Bud 20 20 Sales
400 Don
500 Emma 40
This is what I was expecting - to filter out non-Sales rows if present.
The same example with ANSI join syntax:
SQL> select *
2 from emp
3 left outer join dept
4 on emp.dept_no = dept.dept_id;
EMP_ID EMP_NAME DEPT_NO DEPT_ID DEPT_NAME
---------- -------- ---------- ---------- ------------------------------
300 Carling 10 10 Accounts
100 Adam 10 10 Accounts
200 Bud 20 20 Sales
400 Don
500 Emma 40
SQL> select *
2 from emp
3 left outer join dept
4 on emp.dept_no = dept.dept_id
5 where dept.dept_name = 'Sales'
6 order by emp_id;
EMP_ID EMP_NAME DEPT_NO DEPT_ID DEPT_NAME
---------- -------- ---------- ---------- ------------------------------
200 Bud 20 20 Sales
SQL> select *
2 from emp
3 left outer join dept
4 on emp.dept_no = dept.dept_id
5 and dept.dept_name = 'Sales'
6 order by emp_id;
EMP_ID EMP_NAME DEPT_NO DEPT_ID DEPT_NAME
---------- -------- ---------- ---------- ------------------------------
100 Adam 10
200 Bud 20 20 Sales
300 Carling 10
400 Don
500 Emma 40
Note the (+)-style behaviour of the ANSI join when the filter condition is placed in the join condition rather than in the where clause.
SQL> select *
2 from emp
3 left outer join dept
4 on emp.dept_no = dept.dept_id
5 where (dept_name = 'Sales'
6 or
7 dept_name is NULL)
8 order by emp_id;
EMP_ID EMP_NAME DEPT_NO DEPT_ID DEPT_NAME
---------- -------- ---------- ---------- ------------------------------
200 Bud 20 20 Sales
400 Don
500 Emma 40
Tuesday, 31 May 2011
The Unknown Range and the No Range/Case
Teradata allows two special partitions to be created – the ‘UNKNOWN’ partition and the ‘NO RANGE/NO CASE’ partition. These need to be explicitly defined by the user and count towards the 65,535 (2^16 - 1) allowed partitions.
No Range/No Case: Defined for all those values which are valid on their own but do not fit any defined range or case. Say the table is partitioned on employee joining date and the range starts at ‘2000-01-01’ (maybe the first year of the company’s existence). An operator who is not sure about a certain employee's joining date puts in 1999-12-31. Since this is a valid date, it will go to the NO RANGE partition if that has been defined as part of the table DDL. If not, the record will be rejected. Consider the following example.
DROP TABLE q2;
CREATE TABLE q2
( emp_id INTEGER,
emp_name VARCHAR(50),
dept_no BIGINT,
join_dt DATE)
PRIMARY INDEX (emp_id, join_dt)
PARTITION BY RANGE_N (join_dt BETWEEN '2000-01-01' AND '2013-12-31' EACH INTERVAL '1' YEAR);
CREATE TABLE completed. 0 rows returned. Elapsed time = 00:00:00
INSERT INTO q2 VALUES(1004, 'Darren', 2, '1999-12-23');
INSERT Failed. [5728] Partitioning violation for table tduser.q2.
ALTER TABLE q2
MODIFY PRIMARY INDEX
ADD RANGE NO RANGE;
ALTER TABLE completed. Elapsed time = 00:00:00
INSERT INTO q2 VALUES(1004, 'Darren', 2, '1999-12-23');
INSERT completed. 1 rows processed. Elapsed time = 00:00:00
Output directed to Answer window
Notice that the row is successfully inserted after the NO RANGE is added to the table partition using the ALTER TABLE statement.
Unknown: A similar necessity arises for rows in which the partitioning column has no defined value, which is possible only when it is NULL. Declaring the partitioning column NOT NULL obviates the need for this partition.
INSERT INTO q2 VALUES(1005, 'Evan', 2, NULL);
INSERT Failed. [5728] Partitioning violation for table tduser.q2.
ALTER TABLE q2
MODIFY PRIMARY INDEX
ADD RANGE UNKNOWN;
ALTER TABLE completed. Elapsed time = 00:00:00
INSERT INTO q2 VALUES(1005, 'Evan', 2, NULL);
INSERT completed. 1 rows processed. Elapsed time = 00:00:00
Output directed to Answer window
Let’s insert three more rows into this table which go into other partitions.
INSERT INTO q2 VALUES(1001, 'Alice', 2, '2002-12-23');
INSERT INTO q2 VALUES(1002, 'Bob', 2, '2001-12-23');
INSERT INTO q2 VALUES(1003, 'Cathy', 2, '2003-12-23');
INSERT completed. 3 rows processed. Elapsed time = 00:00:00
Output directed to Answer window
Let’s check out which row has gone into which partition.
SELECT PARTITION, Q2.* FROM q2
ORDER BY emp_id;
Dropping a partition: Note that range partitions can only be dropped from the beginning or end of the range. So the following command throws an error.
ALTER TABLE q2
MODIFY PRIMARY INDEX
DROP RANGE BETWEEN '2003-01-01' AND '2005-12-31' EACH INTERVAL '1' YEAR;
ALTER TABLE Failed. [3732] The facility of altering the partitioning of a non-empty table by adding/dropping partitions other than partitions at the beginning or end of a RANGE_N partitioning expression at partitioning level one has not been implemented yet.
With (or without) delete: Teradata tries to do the following when a drop partition statement is issued:
1. Check if the partitioning allows the rows to be relocated to some other partition - NO RANGE or NO CASE, for example. Note there could be other possible destination partitions with CASE_N.
2. If those rows can be shifted to any other partition(s), the current partition is dropped and the rows are relocated.
3. If those rows cannot be relocated, check for the WITH DELETE clause. If present, delete the corresponding rows and drop the partition; if not present, fail the statement.
Note again that the presence of WITH DELETE is material only when Teradata cannot find a partition for the rows to relocate to.
When Teradata can find a suitable partition, it will merely relocate the rows. Let’s consider the same example:
ALTER TABLE q2
MODIFY PRIMARY INDEX
DROP RANGE BETWEEN '2003-01-01' AND '2013-12-31' EACH INTERVAL '1' YEAR
WITH DELETE; -- The presence of WITH DELETE is not material here since these rows are still covered by the 'NO RANGE' partition
ALTER TABLE completed. Elapsed time = 00:00:00
Let’s check the number of partitions and the rows in them:
SELECT PARTITION, Q2.* FROM q2
ORDER BY emp_id;
Note that the number of partitions has shrunk to 5 from the previous 16, and Cathy shares the ‘NO RANGE’ partition with Darren.
partition | emp_id | emp_name | dept_no | join_dt |
3 | 1,001 | Alice | 2 | 12/23/2002 |
2 | 1,002 | Bob | 2 | 12/23/2001 |
4 | 1,003 | Cathy | 2 | 12/23/2003 |
15 | 1,004 | Darren | 2 | 12/23/1999 |
16 | 1,005 | Evan | 2 | ? |
Where WITH DELETE becomes effective: If the UNKNOWN partition is dropped, WITH DELETE has to be specified, else the command will fail as described above.
ALTER TABLE q2
MODIFY PRIMARY INDEX
DROP RANGE UNKNOWN;
ALTER TABLE Failed. [5728] Partitioning violation for table tduser.q2.
Why? Because the row has no other partition to go to.
ALTER TABLE q2
MODIFY PRIMARY INDEX
DROP RANGE UNKNOWN
WITH DELETE;
ALTER TABLE completed. Elapsed time = 00:00:00
SELECT PARTITION, Q2.* FROM q2
ORDER BY emp_id;
Update on partitioning column: What if Alice’s joining date is suddenly overwritten by an application error?
UPDATE q2
SET join_dt = NULL
WHERE emp_id = 1001;
UPDATE Failed. [5728] Partitioning violation for table tduser.q2.
Ha! The system won’t allow such a change, as the row can’t be put into any known partition.
partition | emp_id | emp_name | dept_no | join_dt |
3 | 1,001 | Alice | 2 | 12/23/2002 |
2 | 1,002 | Bob | 2 | 12/23/2001 |
4 | 1,003 | Cathy | 2 | 12/23/2003 |
4 | 1,004 | Darren | 2 | 12/23/1999 |
Sunday, 26 December 2010
BTEQ (be-tech)
BTEQ is the Batch (or Basic) TEradata Query utility. Non-SQL commands in BTEQ begin with a dot (.)
I was not able to log in to BTEQ for an entire evening. I got the syntax from Teradata User's Guide: The Ultimate Companion, Third Edition (ISBN 9780970498069):
.logon nodeName/userName
The hosts file in my computer looks like this:
127.0.0.1 localhost localtdcop1
::1 localhost
so the string I entered was
.logon localtdcop1/tduser
but it threw an invalid logon error.
Teradata BTEQ 13.00.00.03 for WIN32.
Copyright 1984-2009, Teradata Corporation. ALL RIGHTS RESERVED.
Enter your logon or BTEQ command:
.logon localtdcop1/tduser
.logon localtdcop1/tduser
Password:
*** CLI error: CLI2: BADLOGON(303): Invalid logon string.
*** Return code from CLI is: 303
*** Error: Logon failed!
*** Total elapsed time was 15 seconds.
Teradata BTEQ 13.00.00.03 for WIN32. Enter your logon or BTEQ command:
What makes it worse is that if you put an arbitrary string in the nodeName, it throws a different error, which leads you to think you are doing that part right.
Teradata BTEQ 13.00.00.03 for WIN32. Enter your logon or BTEQ command:
.logon arbStr/tduser
.logon arbStr/tduser
Password:
*** CLI error: MTDP: EM_NOHOST(224): name not in HOSTS file or names database.
*** Return code from CLI is: 224
*** Error: Logon failed!
*** Total elapsed time was 15 seconds.
The localtdcop1 entry was created by the Teradata installation, and this is the string accepted by other Teradata ODBC clients like SQL Assistant. When logging in to BTEQ, however, the cop1 part needs to be dropped, so the logon string looks like this:
.logon localtd/tduser
.logon localtd/tduser
Password:
*** Logon successfully completed.
*** Teradata Database Release is 13.00.00.12
*** Teradata Database Version is 13.00.00.12
*** Transaction Semantics are BTET.
*** Character Set Name is 'ASCII'.
*** Total elapsed time was 1 second.
Also, the node name needs to be 8 characters or fewer, or the system will throw the 'Invalid Logon String' error anyway. Teradata supports two modes of transaction semantics - BTET (see the bold text above) and ANSI. BTEQ by default creates a BTET session, but you can create an ANSI session by entering the following command before the logon command.
Teradata BTEQ 13.00.00.03 for WIN32. Enter your logon or BTEQ command:
.set session transaction ansi
.set session transaction ansi
Teradata BTEQ 13.00.00.03 for WIN32. Enter your logon or BTEQ command:
.logon localtd/tduser
.logon localtd/tduser
Password:
*** Logon successfully completed.
*** Teradata Database Release is 13.00.00.12
*** Teradata Database Version is 13.00.00.12
*** Transaction Semantics are ANSI.
*** Character Set Name is 'ASCII'.
*** Total elapsed time was 1 second.
If you issue this command after logging in, you get this error:
.set session transaction ansi
*** Error: You must not be logged on to change the SQLFLAG or
TRANSACTION settings.
BTET stands for Begin Transaction End Transaction. Any command or set of commands entered is automatically committed or rolled back as a whole. Let’s see some examples.
/* Create a table first*/
CREATE TABLE Order_Table (
Order_Number INTEGER NOT NULL,
Customer_Number INTEGER NOT NULL,
order_date DATE,
order_total DECIMAL(10,2)
);
select * from order_table;
*** Query completed. 4 rows found. 4 columns returned.
*** Total elapsed time was 1 second.
Order_Number | Customer_Number | order_date | order_total |
------------ | --------------- | ---------- | ------------ |
1005 | 23005 | 08/08/22 | 2300.00 |
1002 | 23002 | 10/07/08 | 1000.00 |
1003 | 23003 | 09/09/21 | 2000.00 |
1001 | 23001 | 10/07/07 | 1000.00 |
The command below has a different outcome on the database depending on whether the session is ANSI or BTET.
insert into order_table(1004,23004,'2008-08-22', 2300);
In ANSI mode this needs to be explicitly committed with a commit statement, while in BTET mode it is automatically committed. I noticed a peculiar effect of using an ANSI session: any uncommitted DML statement tends to hold up all other sessions on the table object. Here are the details.
Open an ANSI session and any other session (both in BTEQ). Issue a DML command in the ANSI session like the one I have shown above.
Teradata BTEQ 13.00.00.03 for WIN32. Enter your logon or BTEQ command
.logon localtd/tduser
.logon localtd/tduser
Password:
*** Logon successfully completed.
*** Teradata Database Release is 13.00.00.12
*** Teradata Database Version is 13.00.00.12
*** Transaction Semantics are ANSI.
*** Character Set Name is 'ASCII'.
*** Total elapsed time was 1 second.
BTEQ -- Enter your DBC/SQL request or BTEQ command:
insert into order_table(1004,23004,'2008-08-22', 2300);
insert into order_table(1004,23004,'2008-08-22', 2300);
*** Insert completed. One row added.
*** Total elapsed time was 1 second.
BTEQ -- Enter your DBC/SQL request or BTEQ command:
select * from order_table;
select * from order_table;
*** Query completed. 5 rows found. 4 columns returned.
*** Total elapsed time was 1 second.
Order_Number | Customer_Number | order_date | order_total |
------------ | --------------- | ---------- | ------------ |
1005 | 23005 | 08/08/22 | 2300.00 |
1004 | 23004 | 08/08/22 | 2300.00 |
1002 | 23002 | 10/07/08 | 1000.00 |
1003 | 23003 | 09/09/21 | 2000.00 |
1001 | 23001 | 10/07/07 | 1000.00 |
Now issue a simple select statement on this table from any other session. Notice that this command goes into a wait and there is no output. It will remain in this state till a rollback or commit is issued in the first session.
BTEQ -- Enter your DBC/SQL request or BTEQ command:
commit;
commit;
*** COMMIT done.
*** Total elapsed time was 1 second.
The output from the other session(s) can now be seen.
BTEQ -- Enter your DBC/SQL request or BTEQ command:
select * from order_table;
select * from order_table;
*** Query completed. 5 rows found. 4 columns returned.
*** Total elapsed time was 4 minutes and 24 seconds.
Order_Number | Customer_Number | order_date | order_total |
------------ | --------------- | ---------- | ------------ |
1005 | 23005 | 08/08/22 | 2300.00 |
1004 | 23004 | 08/08/22 | 2300.00 |
1002 | 23002 | 10/07/08 | 1000.00 |
1003 | 23003 | 09/09/21 | 2000.00 |
1001 | 23001 | 10/07/07 | 1000.00 |
For Oracle users this might come as a big surprise: Oracle maintains read consistency as well, but does not block other readers. This means that if an update is not committed, other select (read) statements will continue to see the unchanged data (as if no transaction had taken place). Oracle rocks!
1st Oracle session:
SQL> select systimestamp from dual;
SYSTIMESTAMP
---------------------------------------------------------------------------
26-JUN-11 02.52.57.916000 PM -05:00
SQL> select *
2 from emp
3 order by emp_id;
EMP_ID EMP_NAME DEPT_NO
---------- ------------------------------ ----------
100 Adam 10
200 Bud 20
300 Carling 10
400 Don
500 Emma 40
SQL> update emp
2 set emp_name = 'Dana'
3 where emp_name = 'Don';
1 row updated.
SQL> select systimestamp from dual;
SYSTIMESTAMP
---------------------------------------------------------------------------
26-JUN-11 02.53.56.372000 PM -05:00
SQL> select *
2 from emp
3 order by emp_id;
EMP_ID EMP_NAME DEPT_NO
---------- ------------------------------ ----------
100 Adam 10
200 Bud 20
300 Carling 10
400 Dana
500 Emma 40
The first session sees the changed but uncommitted data at 2:53.
Another Oracle Session opened at 2:55:
SQL> select systimestamp from dual;
SYSTIMESTAMP
---------------------------------------------------------------------------
26-JUN-11 02.55.17.048000 PM -05:00
SQL> select *
2 from emp
3 order by emp_id;
EMP_ID EMP_NAME DEPT_NO
---------- -------- ----------
100 Adam 10
200 Bud 20
300 Carling 10
400 Don
500 Emma 40
The other session does not see the changed, uncommitted data because of read consistency. But unlike Teradata, Oracle does not hold back this session until the first one commits or rolls back.
BTEQ also allows taking batch input from a file on disk and writing the output to a file, if need be. It is akin to the UNIX style of redirecting standard input and output.
C:\> bteq < c:\temp\BTEQ_run_script.txt > c:\BTEQ_results.txt
The contents of the input file look like the example below. Note that the password is not specified; it is entered interactively at runtime.
.set session transaction ansi
.logon localtd/tduser
insert into order_table(1004,23004,'2008-08-22', 2300);
rollback;
insert into order_table(1005,23005,'2008-08-22', 2300);
select * from order_table;
commit;
.quit;
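For fully unattended runs, the .LOGON command also accepts the password inline as .logon tdpid/userid,password, so no interactive prompt is needed. A variant of the file above using that form (mypassword is a placeholder; keep such a file readable only by you, since it contains the password in clear text):

```
.set session transaction ansi
.logon localtd/tduser,mypassword
insert into order_table(1004,23004,'2008-08-22', 2300);
rollback;
insert into order_table(1005,23005,'2008-08-22', 2300);
select * from order_table;
commit;
.quit;
```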
Export Output of a SQL onto a flat file
BTEQ -- Enter your DBC/SQL request or BTEQ command:
.EXPORT REPORT FILE c:\temp\order.txt
.EXPORT REPORT FILE c:\temp\order.txt
*** To reset export, type .EXPORT RESET
BTEQ -- Enter your DBC/SQL request or BTEQ command:
select * from order_table;
select * from order_table;
*** Query completed. 5 rows found. 4 columns returned.
*** Total elapsed time was 1 second.
BTEQ -- Enter your DBC/SQL request or BTEQ command:
.EXPORT RESET
.EXPORT RESET
*** Output returned to console.
Contents of the c:\temp\order.txt
Order_Number Customer_Number order_date order_total
------------ --------------- ---------- ------------
1005 23005 08/08/22 2300.00
1004 23004 08/08/22 2300.00
1002 23002 10/07/08 1000.00
1003 23003 09/09/21 2000.00
1001 23001 10/07/07 1000.00
Note the keyword REPORT after the EXPORT command; the other options are DATA and INDICDATA. These options store binary data in a flat file, which helps in faster loading and exporting.
Each call to the EXPORT command appends information to the file (if it already exists).
Note that I have formatted some of the text output into tables for better readability.