-
Subject: extracting a pattern from file
-
Question posted by su Jun 3, 2005 1:50 pm
Data format of the file:
*****************************************************************
Table Name: Sales_Orders (dbid=6,objid=1115151018, lockscheme=Da)
Page Locks SH_PAGE UP_PAGE EX_PAGE
Grants: 0 0 7
Table Name: UMUM_UTIL_MGT (dbid=4, objid=1274956114,lockscheme=Al )
Page Locks SH_PAGE UP_PAGE EX_PAGE
Grants: 0 0 4
Table Name: UMUM_UTIL_NGT (dbid=4, objid=1274956123,lockscheme=Al )
Page Locks SH_PAGE UP_PAGE EX_PAGE
Grants: 0 0 3
*****************************************************************
I actually tried hard to get the output in the following format, but
with no luck.
Output format:
******************************************************************
Sales_Orders lockscheme=Da 0 0 7
UMUM_UTIL_MGT lockscheme=Al 0 0 4
UMUM_UTIL_NGT lockscheme=Al 0 0 3
******************************************************************
Response posted by Ed Morton Jun 3, 2005 2:59 pm
awk -v RS="" -v FS="[ ,)(]*" '{print $3,$6,$12,$13,$14}' file
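Ed's one-liner treats each blank-line-separated block as one record (RS="") and picks fields by position. A sketch that keys off the "Table Name:" and "Grants:" labels instead, in case the blank-line separation or the exact field layout varies:
awk '
/Table Name:/ {                      # remember the table name and lock scheme
    gsub(/[(),]/, " ")               # strip punctuation so the fields split cleanly
    name = $3
    for (i = 1; i <= NF; i++)
        if ($i ~ /^lockscheme=/) scheme = $i
}
/Grants:/ { print name, scheme, $(NF-2), $(NF-1), $NF }
' file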
-
Subject: remove last and first three lines from a file
-
Question posted by Prince - 22 Dec 2004 16:11:28 -0800
I have a file with the following contents.
COL1
--------------------------------
This is row number one
This is row number two, spanning
multiple lines
This is row number three
3 record(s) selected.
After extracting the number of "record(s) selected" (I can use AWK
here), I want to remove the first and last three lines from the file.
The script is in ksh and I can use sed or awk or something similar (no
perl though). What is the best way to accomplish this? I want the
result to look like the following, with no headers and trailers.
This is row number one
This is row number two, spanning
multiple lines
This is row number three
Response posted by Bill Marcum - Wed, 22 Dec 2004 20:39:15 -0500
awk 'NR>6{print last3[NR%3]} {last3[NR%3]=$0}'
Response posted by Janis Papanagnou - Thurs, Dec 23 2004 1:04 pm
Generalized form of Bill's solution, allowing the number of deleted
lines at the beginning and end to be specified as positional
parameters.
awk -v f=${1:?} -v r=${2:?} 'NR>f+r {print buf[NR%r]} {buf[NR%r]=$0}'
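Wrapped in a small ksh script (a hypothetical wrapper; the name trim.sh and its argument order are illustrative only), so that "trim.sh 3 3 file" reproduces Bill's NR>6 version:
#!/bin/ksh
# trim.sh -- drop the first $1 and the last $2 lines of the named files (or stdin)
# $2 must be at least 1, since it is used as a modulus
f=${1:?front count} r=${2:?rear count}
shift 2
awk -v f="$f" -v r="$r" 'NR > f + r { print buf[NR % r] } { buf[NR % r] = $0 }' "$@"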
-
Subject: rules on stdout and stderr
-
Question posted by Laurent Deniau 2004-12-02 08:12:04 PST
I would like to filter both stdout and stderr of a program and then
print the filtered messages back on their respective streams. So I was
wondering if there is a way to do something like:
handle == stdout && /regex/ { rule for stdout }
handle == stderr && /regex/ { rule for stderr }
and then
prog | filter.awk
would process both channels separately
prog -- stdout --> filter.awk -- stdout --> shell
prog -- stderr --> filter.awk -- stderr --> shell
Response posted by Stephane CHAZELAS 2004-12-03 02:15:45 PST
{
{
cmd 3>&- 4>&- | awk 'stdout filter' >&3 2>&4 3>&- 4>&-
} 2>&1 | awk 'stderr filter' >&2 3>&- 4>&-
} 3>&1 4>&2
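To make the plumbing concrete, here is the same structure with placeholder filters (a sketch; the two awk programs are illustrative, not part of Stephane's answer). The trailing 3>&1 4>&2 makes descriptors 3 and 4 copies of the real stdout and stderr; prog's stdout feeds the first awk, whose output and errors are routed back to the real streams via 3 and 4, while prog's stderr (redirected by the inner group's 2>&1) feeds the second awk, whose output goes back to the real stderr:
{
  {
    prog 3>&- 4>&- |
      awk '/pattern/ { print "out:", $0 }' >&3 2>&4 3>&- 4>&-
  } 2>&1 |
      awk '/pattern/ { print "err:", $0 }' >&2 3>&- 4>&-
} 3>&1 4>&2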
-
Subject: Problem With Regular Expression
-
Question posted by Tony George 30 Nov 2004 16:46:15 -0500
... extract just the basic pathname from the full version extended
pathname. For example:
/vobs/test@@/main/tony/1/pets/main/fun/jim/22/cats/main/steve/2/siamese/main/5
The actual filename I'm trying to extract here is
/vobs/test/pets/cats/siamese
What I want to get rid of are all the branch
names and version numbers in between subdirectories and the final file name.
Response posted by Walter Briscoe 1 Dec 2004 08:44:17 +0000
sed 's:@@::;s:/main/[^1-9]*[1-9][0-9]*::g'
Response posted by Stephane Chazelas 1 Dec 2004 08:46:23 +0000
sed 's@/main[^0-9]*[0-9]\{1,\}@@g'
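A quick way to check either expression against the sample path from the question (a sketch; note that Walter's version also strips the "@@" marker explicitly):
printf '%s\n' '/vobs/test@@/main/tony/1/pets/main/fun/jim/22/cats/main/steve/2/siamese/main/5' |
sed 's:@@::;s:/main/[^1-9]*[1-9][0-9]*::g'
# prints: /vobs/test/pets/cats/siamese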
-
Subject: How to grep for text1 and text2 and text3 on three different lines?
-
Question posted by Suhas Date: 23 Nov 2004 12:46:38 -0800
I have an ASCII report file that has the following text:
Net
Tare
Gross
Note that the information is on 3 separate lines and does not necessarily
start at the first column. How can I write a condition:
if "Net" is found AND "Tare" is found AND "Gross" is found THEN true
ELSE false.
Response posted by Stephane CHAZELAS Tue, 23 Nov 2004 21:08:50 +0000
if awk '/Net/ {n=1} /Tare/ {t=1} /Gross/ {g=1} END {exit(3-n-t-g)}' < report.file
then
echo "3 ones found"
else
echo "not 3 ones found"
fi
The input redirection from the "report.file" was implied but not explicitly stated in the original newsgroup posting.
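A grep-only equivalent, for comparison (a sketch; it makes three passes over the file, whereas the awk version reads it once):
if grep -q Net report.file && grep -q Tare report.file && grep -q Gross report.file
then
    echo "all three found"
else
    echo "not all three found"
fi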
-
Subject: Field positions based on a string ...
-
Question posted by papi 2004-11-14 16:22:16 PST
I have a program whose output consists of a variable number of
fields, separated by spaces, out of which one has a known value
(string). I would like to obtain the first field, plus the fields just before
and just after the one with the known value, as in the example below:
$ program
x y z m w v t
p r m s o q
and I would like to be left with:
x z w
p r s
based on the known "m", of course, which is a string.
Response posted by Ed Morton 2004-11-14 17:30:34 PST
awk '{for (i=1; i<=NF; i++) if ($i == "m") print $1, $(i-1), $(i+1) }'
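A variant that takes the marker as a parameter instead of hard-coding "m" (a sketch; it also skips the first and last positions so that $(i-1) and $(i+1) always exist):
awk -v key="m" '{ for (i = 2; i < NF; i++) if ($i == key) print $1, $(i-1), $(i+1) }'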
-
Subject: Searching a file for a string with a date, plus the previous line
-
Question posted by Shiva MahaDeva 2004-11-14 17:24:23 PST
I'm looking for a way to search a file for a string and the current date
on the same line, and to show the previous line too. How can I
do this?
cats 11-05-04
dogs 11-15-04
rats 11-20-04
bees 10-05-04
I want to find the second line in this file (dogs and 11-15-04) and
show that line and the previous line:
cats 11-05-04
dogs 11-15-04
How can I do this? Thanks in advance.
Response posted by Bill Marcum 2004-11-15 00:00:02 PST
awk '/string/{print prev;print $0} {prev=$0}'
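Filled in for the sample data, with the date supplied by the shell (a sketch; date +%m-%d-%y matches the mm-dd-yy format shown, and "dogs" stands in for whatever string is being searched):
awk -v d="$(date +%m-%d-%y)" '$0 ~ /dogs/ && $0 ~ d { print prev; print } { prev = $0 }' file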
-
Subject: sed/awk - 2 simple questions
-
Question posted by Robert Tulke 2004-10-20 15:09:56 PST
I have a file with the following content:
---snip
helpc.de
Domain Beantragung hatte DNS-Fehler
hessenbruch-personal.de
Domain Beantragung hatte DNS-Fehler
hms-lamprecht.de
Domain Beantragung hatte DNS-Fehler
..
..
..
---snap
but I need the second line joined directly after the first line, separated
by two tabs, and the third line deleted. The right content looks like this:
helpc.de Domain Beantragung hatte DNS-Fehler
hessenbruch-personal.de Domain Beantragung hatte DNS-Fehler
hms-lamprecht.de Domain Beantragung hatte DNS-Fehler
Response posted by Stephane CHAZELAS 2004-10-20 23:46:58 PST
paste -d '\t\t\0' - /dev/null - - < your-file.txt
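Stephane's paste reads the domain and the next two lines from stdin, interleaving an empty column from /dev/null; the '\t\t\0' delimiter list puts a tab on each side of that empty column (giving the two tabs) and joins the third line with no delimiter, so it vanishes when it is empty. An awk alternative under the same assumption (three-line groups, with the third line discarded outright):
awk 'NR % 3 == 1 { d = $0 }
     NR % 3 == 2 { printf "%s\t\t%s\n", d, $0 }' your-file.txt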
-
Subject: expr input from a file
-
Question posted by Faeandar 2004-10-06 16:05:30 PST
I've got a file that has a number on every line. I want to add all the
lines and get a total.
Response posted by Chris F.A. Johnson 2004-10-06 16:42:43 PST
echo $(( `tr -s "\012 " "[+*]" < $filename` 0 ))
Response posted by Seb 2004-10-06 16:26:07 PST
awk '{sum += $0} END {print sum}' yourFile
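A quick check of the awk version, which also shows why it is usually preferable to the expr/$(( )) trick: awk sums in floating point, so decimal values are handled too, while shell arithmetic is integer-only (a sketch; the numbers are made up):
printf '1\n2.5\n3\n' | awk '{sum += $0} END {print sum}'
# prints 6.5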
-
Subject: Replace a line with SED / Regex?? Need help.
-
Question posted by WAS Admin 2004-10-04 19:02:51 PST
What I'm trying to do is find a specific line in an xml file and then
replace the line after it. For example:
The xml file name is web.xml. Below, the line with "parm1" is the one
I'll need to change to read
<param-value>parm2</param-value>
The contents of this line could change from time to time, but the line
above it (the one containing WebRoot) will always be the same.
<context-param id="WebParm_12345">
<param-name>WebRoot</param-name>
<param-value>parm1</param-value>
<description>This is an xml file</description>
</context-param>
Is there a way to use a regex to search for the
"<param-name>WebRoot</param-name>" line and then replace the line
below with what I want?
Response posted by William Park 2004-10-04 19:26:13 PST
sed '/WebRoot/{n;s/parm1/parm2/;}'
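Since the question says the current value will not always be "parm1", a version that replaces whatever sits between the <param-value> tags on the line after the WebRoot match (a sketch):
sed '/WebRoot/{n;s|<param-value>.*</param-value>|<param-value>parm2</param-value>|;}' web.xml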
-
Subject: AIX Script to Summarize By First Column By Adding Values in Numerical Columns
-
Question posted by SAP BASIS Consultant 2004-09-29 11:06:54 PST
I have a file which looks as follows (it is the output of ps
aux, though that is not important):
(The COL headers are not part of the file; they are only for description.)
COL1 COL2 COL3 COL4 COL5 COL6
name1 xxx xxx xxx 1000 11
name1 xxx xxx xxx 50 40
name2 xxx xxx xxx 30 5
name1 xxx xxx xxx 50 10
name2 xxx xxx xxx 100 4
I would like to summarize the data by the first column, ignoring
columns #2, #3 and #4, and adding the values in col#5 and #6.
The output would be as follows (for name1, 1100 = 1000+50+50, etc.):
name1 1100 61
name2 130 9
Response posted by Chris F.A. Johnson 2004-09-29 13:38:51 PST
awk '{x[$1]+=$5;y[$1]+=$6}END{for(n in x) printf "%s\t%d\t%d\n",n,x[n],y[n]}'
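The order in which awk's "for (n in x)" visits the array indices is unspecified, so to get the names in sorted order as in the sample output, pipe the result through sort:
awk '{x[$1]+=$5;y[$1]+=$6}END{for(n in x) printf "%s\t%d\t%d\n",n,x[n],y[n]}' file | sort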
-
Subject: Sorting by basename of file
-
Question posted by Vikas Agnihotri 2004-09-27 11:01:19 PST
I have a file containing 2 full pathnames per line like
/full/path/.../to/file /another/full/.../path/to/file
How can I sort this file by the "basename" of the second filename on the line?
Response posted by Michael Tosch 2004-09-27 11:36:32 PST
awk -F/ '{print $NF"/"$0}' file | sort | cut -d/ -f2-
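This is the decorate-sort-undecorate idiom: the basename of the last pathname is prefixed as a sort key (with "/" as the key separator), the lines are sorted on it, and the key is stripped off again. A small demonstration with made-up paths (a sketch):
printf '%s\n' '/a/b/zebra /c/d/apple' '/a/b/ant /c/d/zoo' |
awk -F/ '{print $NF"/"$0}' | sort | cut -d/ -f2-
# output, sorted by the second basename (apple before zoo):
# /a/b/zebra /c/d/apple
# /a/b/ant /c/d/zoo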
-
Subject: stupid awk q
-
Question posted by foo@bar.com 2004-09-07 14:38:35 PST
Okay, so I want to sum up a piped-in stream of numbers. I would
normally do this with awk, but the problem with either awk or how I'm
using awk is that it spits out a running total on the way to the final sum. I'd
rather just have the final sum without a running total; e.g., I want the rough
total memory footprint of apache, so I run:
ps -e -o 'vsz fname' |
grep httpd |
awk '{s += $1}{print s}'
But if I want just the total, I have to pipe it through 'tail -1'. Am I
missing something about awk that would avoid the running total?
Response posted by Seb 2004-09-07 14:51:03 PST
Response posted by Chris F.A. Johnson 2004-09-07 14:51:13 PST
ps -e -o 'vsz fname' | awk '/httpd/ {s += $1} END {print s}'
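A small extension that also reports how many httpd processes went into the sum (a sketch; on most systems ps reports vsz in kilobytes, but check your ps(1) man page):
ps -e -o 'vsz fname' | awk '/httpd/ { s += $1; n++ } END { printf "%d processes, %d KB total\n", n, s }'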
-
Subject: BASH: reading file lines with spaces
-
Question posted by Mark A Framness 2004-09-06 11:41:57 PST
I have a file consisting of multiple lines.
FAX DESTINATION ONE@5551111
FAX DESTINATION TWO@5552222
FAX DESTINATION THREE@5553333
FAX DESTINATION FOUR@5554444
I want to read the above file and append each line to a sendfax
command so a sendfax command like:
sendfax -d FAX DESTINATION ONE@5551111 \
-d FAX DESTINATION TWO@5552222 \
-d FAX DESTINATION THREE@5553333 \
-d FAX DESTINATION FOUR@5554444 \
faxfile.pdf
is built and executed.
Response posted by Ed Morton 2004-09-06 17:25:10 PST
sendfax `awk '{printf " -d %s",$0}' file` faxfile.pdf
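The backquoted awk output is unquoted, so each word of a destination reaches sendfax as a separate argument, exactly as in the command shown in the question. If each destination must instead arrive as a single argument with its spaces intact, a bash array built with a read loop is one way to do it (a sketch; how sendfax really wants its -d arguments is an assumption here):
args=()
while IFS= read -r dest; do
    args+=(-d "$dest")
done < file
sendfax "${args[@]}" faxfile.pdf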
-
Subject: changing a string in all users home-directory
-
Question posted by Tim Moor 2004-08-25 23:23:18 PST
I have to change a string in the .profile file of all my users. All the
.profiles are located within the /home directory. The string I'd like to
change looks like this:
/bin/old
and should be replaced with
/bin/new
Response posted by Alexis Huxley 2004-08-25 23:58:11 PST
perl -pi -e 's/\/bin\/old\b/\/bin\/new/' /home/*/.profile
Response posted by Stephane Chazelas 2004-08-26 00:19:34 PST
perl -pi -e 's,/bin/old,/bin/new,g' /home/*/.profile
You may prefer using:
perl -pi.before-my-change -e 's,/bin/old,/bin/new,g' /home/*/.profile
to keep a copy of the previous .profile file in case there is
a problem with the substitution.
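If perl is not available, GNU sed can do the same in-place edit with a backup suffix (a sketch; -i is a GNU/BSD extension rather than POSIX, and the suffix syntax differs slightly between the two implementations):
sed -i.before-my-change 's,/bin/old,/bin/new,g' /home/*/.profile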
-
Subject: duplicate the paragraph n times
-
Question posted by Prince Kumar 2004-08-18 17:46:54 PST
I have a file with the following contents.
%cat test.txt
start para para_1 [
text1
text2
end ]
start para para_2 [
text1
text2
end ]
I want to duplicate each of these paragraphs 5 times.
Using sed/awk/perl, how would I achieve this?
Response posted by Janis Papanagnou 2004-08-18 18:57:41 PST
With square brackets as block boundaries:
awk 'BEGIN { RS=""; ORS="\n\n" }
/\[/,/\]/ { print ; print ; print ; print ; print }'
or using a complex block pattern to avoid symbol clashes:
awk 'BEGIN { RS=""; ORS="\n\n" }
/start para .*\[/,/end \]/ { print ; print ; print ; print ; print }'
Response posted by Brendon Caligari 2004-08-18 19:25:53 PST
perl -pe 'undef $/; s/^start para.*?end ]$/("$&\n\n" x 4) . $&/mesg' filename
Response posted by John W. Krahn 2004-08-19 00:27:35 PST
perl -00ne'$a = $_; print $a for 1 .. 5' test.txt
Response posted by rakesh sharma 2004-08-19 02:05:00 PST
sed -e '
/^start para/,/^end ]/!b
H;/^end ]/!d
g;G;G;G;G
x;s/.*//;x
' test.txt
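A generalization that takes the repeat count as a variable (a sketch, assuming, as the awk answers above do, that the paragraphs are separated by blank lines so that RS="" sees each one as a record):
awk -v n=5 'BEGIN { RS=""; ORS="\n\n" } { for (i = 1; i <= n; i++) print }' test.txt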
-
Subject: Replace newlines selectively (using sed)
-
Question posted by Martin Schweikert 2004-08-03 07:47:01 PST
I would like to tidy up bdf (HP-UX) output. Example:
/dev/vgcondort/oracle
4096000 1797302 2155065 45%
/opt/DOCUMENTUM/condort/oracle/product
/dev/vgcondort/tmp 2048000 6839 1913614 0%
/opt/DOCUMENTUM/condort/tmp
/dev/vgcondort/share
409600 96442 293618 25%
Each newline that is followed by spaces and a digit should be translated into a blank.
Response posted by Rakesh Sharma 2004-08-03 22:24:36 PST
sed -e '
$!N
s/\n *\([0-9]\)/ \1/;t
P;D
' yourfile
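An awk version of the same join (a sketch; any line starting with optional whitespace and a digit is treated as a continuation of the previous line):
awk '
/^[ \t]*[0-9]/ { sub(/^[ \t]+/, ""); print prev, $0; prev = ""; next }
prev != ""     { print prev }
               { prev = $0 }
END            { if (prev != "") print prev }
' yourfile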
-
Subject: How to obtain the age of a file (epoch time stamp)
-
Question posted by kweng 2004-07-29 03:07:41 PST
I was wondering what is the shortest way to obtain the age of a
file from inside a shell script, without having to calculate it from
'ls -l' output and the current time on the system.
Response posted by Stephane Chazelas 2004-07-29 03:33:30 PST
file=/path/to/the/file
cpio -o 2> /dev/null << EOF | od -x | sed -n 'n
s/[^ ]* *\(....\) *\(....\).*/16i\1\2p/;y/abcdef/ABCDEF/;p;q'|dc
$file
EOF
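On systems that have it, stat reports the modification time directly, which is much simpler than the portable cpio trick above (a sketch; the option letters differ between GNU and BSD stat, and neither is POSIX):
file=/path/to/the/file
stat -c %Y "$file"                                # GNU coreutils: mtime, seconds since the epoch
stat -f %m "$file"                                # BSD/macOS equivalent
echo $(( $(date +%s) - $(stat -c %Y "$file") ))   # age in seconds (GNU variant)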
-