Sunday, August 24, 2008

[HACKERS] IN, BETWEEN, spec compliance, and odd operator names

I was looking just now at gram.y's handling of various peculiar SQL
constructs, and was reminded of a point that's bothered me before,
but I don't recall if it's ever been discussed explicitly on -hackers.

As an example, take the production for BETWEEN ASYMMETRIC:

a_expr BETWEEN opt_asymmetric b_expr AND b_expr
    {
        $$ = (Node *) makeA_Expr(AEXPR_AND, NIL,
                 (Node *) makeSimpleA_Expr(AEXPR_OP, ">=", $1, $4, @2),
                 (Node *) makeSimpleA_Expr(AEXPR_OP, "<=", $1, $6, @2),
                 @2);
    }

Okay, this is a pretty direct implementation of how SQL99 defines the
construct: 8.3 <between predicate> syntax rule 6 saith

"X BETWEEN ASYMMETRIC Y AND Z" is equivalent to "X>=Y AND X<=Z".

But it leaves me feeling dissatisfied. What if the datatype has
standard comparison operators (as identified by a default btree opclass)
but they're not named ">=" and "<=" ? Perhaps more plausibly, what if
those operators exist but aren't in the current search path?
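To see that the expansion really is resolved purely by operator name,
here's a quick sketch (point has no comparison semantics at all, yet
BETWEEN will cheerfully use whatever ">="/"<=" it can find; the support
functions below are made up):

SELECT point(1,1) BETWEEN point(0,0) AND point(2,2);
-- ERROR:  operator does not exist: point >= point

CREATE FUNCTION point_ge(point, point) RETURNS bool
    AS 'SELECT $1[0] >= $2[0]' LANGUAGE sql IMMUTABLE;
CREATE FUNCTION point_le(point, point) RETURNS bool
    AS 'SELECT $1[0] <= $2[0]' LANGUAGE sql IMMUTABLE;
CREATE OPERATOR >= (LEFTARG = point, RIGHTARG = point, PROCEDURE = point_ge);
CREATE OPERATOR <= (LEFTARG = point, RIGHTARG = point, PROCEDURE = point_le);

SELECT point(1,1) BETWEEN point(0,0) AND point(2,2);
-- true, per the deliberately silly x-coordinate-only operators above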

The production for NOT BETWEEN is even more troubling:

a_expr NOT BETWEEN opt_asymmetric b_expr AND b_expr
    {
        $$ = (Node *) makeA_Expr(AEXPR_OR, NIL,
                 (Node *) makeSimpleA_Expr(AEXPR_OP, "<", $1, $5, @2),
                 (Node *) makeSimpleA_Expr(AEXPR_OP, ">", $1, $7, @2),
                 @2);
    }

I can't object too much to the hardwired application of DeMorgan's law
(NOT (A AND B) => (NOT A) OR (NOT B)) but what this also has is a
hardwired assumption that "<" and ">" exist and are the negators of
">=" and "<=" respectively. Probably true, but let's see you find
chapter and verse in the SQL spec to support that...


Seems to me that what this boils down to is whether we want to read the
spec literally ("it says the construct is defined in terms of operators
named >= and <=, therefore we should do that") or by intent (obviously
what they *want* is a construct that behaves sensibly in terms of the
datatype's semantics).

We are more than a bit schizophrenic on this point --- in different
parts of the system you can find these things being done both ways.
There is plenty of code that insists on finding a default btree opclass
to define notions of "less" or "greater"; but we have these purely
name-based transformations in gram.y, and I think there are some other
parts of the parser that do similar things.

I'm not particularly eager to start changing things in this area right
now, but it seems to me that it'd be a good idea to establish a project
policy about what we consider to be the preferred behavior, with an eye
to eventually migrating the parts of the system that don't conform.

My own feeling is that we should avoid imputing particular semantics
to particular operator names, and so these constructs should always be
defined by reference to operators found in a default opclass for the
datatype, rather than by specific operator names. However, that way
will likely take more code and cycles to implement than purely
name-based definitions; and there is also the argument that it violates
the in-so-many-words definitions given by the spec.
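(For comparison, the catalog lookup the opclass-based definition would
have to do is roughly the following -- a SQL sketch of the C-level
equivalent, using btree strategy number 4 for ">=":

SELECT a.amopopr::regoperator
FROM pg_amop a
JOIN pg_opclass c ON a.amopfamily = c.opcfamily
WHERE c.opcintype = 'int8'::regtype
  AND c.opcdefault
  AND c.opcmethod = (SELECT oid FROM pg_am WHERE amname = 'btree')
  AND a.amoplefttype = c.opcintype
  AND a.amoprighttype = c.opcintype
  AND a.amopstrategy = 4;    -- BTGreaterEqualStrategyNumber

That extra lookup per construct is the "more code and cycles" referred
to above.)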

Comments?

regards, tom lane

PS: there are some other issues here, like whether BETWEEN should be
allowed to cause double evaluation of its left-hand argument, and
whether we wouldn't like it to get reverse-listed by ruleutils.c
in the original BETWEEN format rather than as an expanded version.
However, what I'd like to focus on in this particular thread is the
narrow issue of defining the constructs in terms of operator names
vs operator semantics.
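To see the double-evaluation point concretely, here's a quick sketch
using a sequence so that each evaluation leaves a trace:

CREATE SEQUENCE evalcount;
SELECT nextval('evalcount') BETWEEN 0 AND 100;  -- expands to two comparisons
SELECT currval('evalcount');                    -- returns 2: LHS evaluated twice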

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [PERFORM] Large number of tables slow insert

I don't know if the max_fsm_relations issue will solve your problem or not.  I do know that you definitely want to increase it to a number larger than the sum of all your tables and indexes -- preferably with room to grow.  Additionally, the max_fsm_pages value will likely need to be increased as your data size grows.

I work with about 9000 tables at the moment (growing each day) and do not see your issue.  I do not have indexes on most of my tables, and max_fsm_relations is set to 30000.
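For reference, a postgresql.conf sketch (the exact values are illustrative; size both settings above what you actually have, with room to grow):

max_fsm_relations = 30000      # must exceed total tables + indexes
max_fsm_pages = 2000000        # grows with data size; VACUUM VERBOSE warns when it's too low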

Although this will increase the number of tables even more, you may want to consider partitioning your tables by time: day, week, or month.
That way you may not even need an index on the date, as queries will only scan the tables covering the date range specified (NOTE -- this is not true if you use prepared statements -- prepared statements + partitioned tables = performance disaster).
In addition, this may allow you to add the indexes on a partition at a later date.  For example:

Partition by week: the current week's table has no indexes and is thus fast to insert into.  But once it becomes last week's table, and you are only inserting into a new table, the old one can have indexes added to it -- it is now a mostly read-only table.  This way, full scans are only needed on the current week's table, which will usually be smaller than the others and more likely to be cached in memory as well.  You may want to partition by day or month instead; a sketch of the weekly scheme follows.
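A minimal sketch of that weekly scheme, using 8.3-style inheritance partitioning (names and dates are illustrative, and constraint_exclusion must be on for the planner to skip partitions):

CREATE TABLE measures (
    "timestamp" timestamp NOT NULL,
    count       smallint,
    "value"     smallint[]
);

CREATE TABLE measures_2008w34 (
    CHECK ("timestamp" >= '2008-08-18' AND "timestamp" < '2008-08-25')
) INHERITS (measures);

-- insert into the current week's (index-free) child directly;
-- when the week rolls over, index the now read-only partition:
CREATE INDEX measures_2008w34_ts_idx ON measures_2008w34 ("timestamp");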
You may want to combine several sensors into one table, so that you can partition by day or even hour.  It all depends on how you expect to access the data later, and on how much you can afford to deal with managing all those tables -- Postgres only does some of the partitioning work for you, and you have to be very careful with your queries.  There are also some query-optimizer oddities with partitioned tables to be aware of.

On Sun, Aug 24, 2008 at 3:30 PM, Loic Petit <tls.wydd@free.fr> wrote:
Quite a lot of answers!


> Are all INSERTs made by one application? Maybe PREPARED STATEMENTS will help you? What about optimization at the application level?
Yes, it's only one application (through JDBC); optimizations (grouped transactions, prepared statements) will be done, but this was our very first test under hard conditions, and it scared us all :p.


> Can you tell us what kind of application this is? It sounds like a control-systems application where you will write the current values of the sensors with each scan of a PLC.  If so, is that a good idea?  Also, is 3,000 sensors realistic? That would be a lot of sensors for one control system.
Our research project is trying to manage large-scale sensor network deployments. 3,000 is quite a huge deployment, but it can be realistic for a large agricultural deployment, for example.


> That would be an insert plus updates to each of your 6 indexes every 0.33 ms. Is that a good idea?  Is there a better strategy? What are you measuring with the instruments, e.g. is this a process plant or manufacturing facility? What will people do with this data?
I try to remove as many of the indexes as I can. Actually I only really need the index on timestamp, to see the last readings for example and to search historical data by period; the others (on the values) are more for queries like "when was this sensor over 45ºC", which can run without indexes (slower, but less heavy at insert time). I get the data from different TelosB motes that gather temperature / humidity and light.


> Have you checked what you are bottlenecking on - CPU or disk? Try iostat/top/etc during the inserts. Also check actual disk utilization (iostat -x on linux/freebsd; varies on others) to see what percentage of time the disk/storage device is busy.
I looked at the results of iostat and top; the postgres process was at 70% CPU. Yes, I know my test machine is not brand new, but I have to find a good solution with it.

OK, I just ran some tests. It seems that I hammered the database too much right after creating the tables, so VACUUM ANALYZE could not be run. I have better results now:

Average of writing 10 rows in each table
ON 1000 TABLES
    Without indexes at all : ~1.5s
    With only the index on timestamp : ~2.5s
    With all indexes : ~30s

ON 3000 TABLES
    Without indexes at all : ~8s
    With only the index on timestamp : ~45s
    With all indexes : ~3min

I don't know why, but the difference is quite huge with indexes! When I ran my vacuum, the system warned me about "max_fsm_relations" (1000). Do you think changing it will help (as Scott said)? I didn't have time to run tests with VACUUM ANALYZE on the system tables; see you tomorrow for more news...

Re: [PERFORM] Identifying the nature of blocking I/O

Peter Schuller wrote:

> But in general, it would be very interesting to see, at any given
> moment, what PostgreSQL backends are actually blocking on from the
> perspective of PostgreSQL.

The recent work on DTrace support for PostgreSQL will probably give you
the easiest path to useful results. You'll probably need an OpenSolaris
or (I think) FreeBSD host, though, rather than a Linux host.

--
Craig Ringer

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [BUGS] BUG #4368: problem with jdbc postgresql

devi wrote:

> It's really very confusing... please help me by giving answers to
> these questions:
>
> 1. Where do I save postgresql-8.2-508.jdbc3?
> (everyone has given different ideas)
>
> 2. How do I set the classpath? Do I have to download Tomcat?
>
> 3. Can you please give me a clear picture of how to set the classpath
> with postgresql-8.2-508.jdbc3?
>
> 4. How do I run my Java application files?

Your question does not describe a bug or problem with PostgreSQL or its
JDBC drivers.

Your questions are all covered in the documentation for the JDBC driver
and as a part of very basic Java use.

You should read:

http://doc.postgresintl.com/jdbc/index.html
http://java.sun.com/docs/books/tutorial/
http://java.sun.com/docs/books/tutorial/jdbc/index.html

Then, if you are still having problems, use Google to search for
information about your problem.

If you STILL cannot find the answer and the question is about PostgreSQL
and Java/JDBC, consider posting a question on the PostgreSQL JDBC
mailing list.

--
Craig Ringer

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

Re: [PERFORM] Large number of tables slow insert

Quite a lot of answers!

> Are all INSERTs made by one application? Maybe PREPARED STATEMENTS will help you? What about optimization at the application level?
Yes, it's only one application (through JDBC); optimizations (grouped transactions, prepared statements) will be done, but this was our very first test under hard conditions, and it scared us all :p.

> Can you tell us what kind of application this is? It sounds like a control-systems application where you will write the current values of the sensors with each scan of a PLC.  If so, is that a good idea?  Also, is 3,000 sensors realistic? That would be a lot of sensors for one control system.
Our research project is trying to manage large-scale sensor network deployments. 3,000 is quite a huge deployment, but it can be realistic for a large agricultural deployment, for example.

> That would be an insert plus updates to each of your 6 indexes every 0.33 ms. Is that a good idea?  Is there a better strategy? What are you measuring with the instruments, e.g. is this a process plant or manufacturing facility? What will people do with this data?
I try to remove as many of the indexes as I can. Actually I only really need the index on timestamp, to see the last readings for example and to search historical data by period; the others (on the values) are more for queries like "when was this sensor over 45ºC", which can run without indexes (slower, but less heavy at insert time). I get the data from different TelosB motes that gather temperature / humidity and light.

> Have you checked what you are bottlenecking on - CPU or disk? Try iostat/top/etc during the inserts. Also check actual disk utilization (iostat -x on linux/freebsd; varies on others) to see what percentage of time the disk/storage device is busy.
I looked at the results of iostat and top; the postgres process was at 70% CPU. Yes, I know my test machine is not brand new, but I have to find a good solution with it.

OK, I just ran some tests. It seems that I hammered the database too much right after creating the tables, so VACUUM ANALYZE could not be run. I have better results now:

Average of writing 10 rows in each table
ON 1000 TABLES
    Without indexes at all : ~1.5s
    With only the index on timestamp : ~2.5s
    With all indexes : ~30s

ON 3000 TABLES
    Without indexes at all : ~8s
    With only the index on timestamp : ~45s
    With all indexes : ~3min

I don't know why, but the difference is quite huge with indexes! When I ran my vacuum, the system warned me about "max_fsm_relations" (1000). Do you think changing it will help (as Scott said)? I didn't have time to run tests with VACUUM ANALYZE on the system tables; see you tomorrow for more news...

Re: [PERFORM] Large number of tables slow insert

> I use PostgreSQL 8.3.1-1 to store a lot of data coming from a large amount
> of sensors. In order to have good performance when querying by timestamp on
> each sensor, I partitioned my measures table per sensor. Thus I create
> a lot of tables.
> I simulated a large sensor network with 3000 nodes, so I have ~3000 tables.
> And it appears that each insert (in separate transactions) into the database
> takes about 300 ms (3-4 inserts per second), in tables holding just a few
> tuples (< 10). I think you can understand that it's not efficient at all,
> because I need to handle a lot of inserts.
>
> Do you have any idea why it is that slow? And how can I get good inserts?
>
> My test machine: Intel P4 2.4 GHz, 512 MB RAM, Ubuntu Server (ext3)
> iostat tells me that I do 0.5 MB/s reading and ~6-7 MB/s writing during
> constant inserts

Have you checked what you are bottlenecking on - CPU or disk? Try
iostat/top/etc during the inserts. Also check actual disk utilization
(iostat -x on linux/freebsd; varies on others) to see what percentage
of time the disk/storage device is busy.

You say you have 3-4 inserts/second causing 6-7 MB/s writing. That
suggests to me the inserts are fairly large. Are they in the MB range,
which would account for the I/O?

My suspicion is that you are bottlenecking on CPU, since in my
experience there is definitely something surprisingly slow about
encoding/decoding data at the protocol level or somewhere else that is
involved in backend/client communication. I.e., I suspect your client
and/or server is spending a lot of CPU time with things not directly
related to the actual table inserts. If so, various suggested schemes
w.r.t. indexing, table bloat etc won't help at all.

In short, 6-7 MB/second would be fairly consistent with INSERT/COPY
operations being CPU bound on a modern CPU, in my experience. It may
be that this is entirely untrue in your case, but it sounds like a
reasonable thing to at least consider.

--
/ Peter Schuller

PGP userID: 0xE9758B7D or 'Peter Schuller <peter.schuller@infidyne.com>'
Key retrieval: Send an E-Mail to getpgpkey@scode.org
E-Mail: peter.schuller@infidyne.com Web: http://www.scode.org

Re: [GENERAL] seq bug 2073 and time machine

Ivan Sergio Borgonovo wrote:
> I was trying to drop a serial.
> Dropped the default for a column.
> Now it seems I can't drop the sequence since I incurred in:
>
> http://archives.postgresql.org/pgsql-bugs/2005-11/msg00304.php
>
> Is there a way I can still delete the sequence without using a
> backup?

If you're feeling courageous, you can remove the pg_depend entries for
that sequence. Make sure to try it in a transaction and drop the
sequence in that same transaction, so that if you mess up the catalogs
too badly you can get out of it by rolling back.

In recent releases (I think 8.2 and beyond) you can use ALTER SEQUENCE
... OWNED BY to fix the problem without messing with the catalogs
directly. This is obviously recommended if available.
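In SQL, the two routes look roughly like this (the sequence and column names are illustrative):

-- 8.2+: break the bogus ownership link, then drop normally
ALTER SEQUENCE mytable_id_seq OWNED BY NONE;
DROP SEQUENCE mytable_id_seq;

-- pre-8.2, the catalog-surgery route, guarded by a transaction:
BEGIN;
DELETE FROM pg_depend
 WHERE classid = 'pg_class'::regclass
   AND objid = 'mytable_id_seq'::regclass;
DROP SEQUENCE mytable_id_seq;
-- COMMIT only if everything looks sane; otherwise ROLLBACK
COMMIT;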

--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [ADMIN] restoring from dump

Hi Marcelo,

> What happened to that clog file? Was it deleted?
I assume you mean 0088.  I think it got created as
a result of my executing (and failing) the "dropdb"
command.  Judging from the timestamp, any other
explanation is not plausible.

> Where are 0086 and 0087?
I have no idea.  We have had a series of power outages,
and maybe they got lost as a result.  I am not convinced,
however.

Instead of your dd suggestion, I thought of copying
0084 and calling it 0085.  I have no idea what consequences
that might have.  Perhaps you can comment?

Regards,

Tena Sakai
tsakai@gallo.ucsf.edu




-----Original Message-----
From: Marcelo Martins [mailto:pglists@zeroaccess.org]
Sent: Sun 8/24/2008 12:58 PM
To: Tena Sakai
Subject: Re: [ADMIN] restoring from dump


What happened to that clog file? Was it deleted?
You could try re-creating it with zero contents as a last resort,
though ... the transaction will be lost too.

Hmm, where are 0086 and 0087?

postgres~$ dd if=/dev/zero of=/var/lib/pgsql/data/pg_clog/0085 bs=256K 
count=1

Marcelo
Linux/Solaris System Administrator
http://www.zeroaccess.org

On Aug 24, 2008, at 12:07 PM, Tena Sakai wrote:

> Hi Jeff,
>
> Quagmire deepens, it seems...
>
> I tried:
>   dropdb <myDB>
> and it told me:
>   dropdb: database removal failed: ERROR:  could not access status 
> of transaction 139602298
>   DETAIL:  Could not open file "pg_clog/0085": No such file or 
> directory.
>
> I went into pg_clog directory and issued:
>   ls -lt | head
> and it told me:
>   -rw-------   1 postgres postgres 163840 Aug 24 09:57 0088
>   drwx------  11 postgres postgres   4096 Aug 22 13:56 ..
>   drwx------   2 postgres postgres   4096 Jun 30 16:03 .
>   -rw-------   1 postgres postgres 262144 Jun  1 20:04 0084
>   -rw-------   1 postgres postgres 262144 Apr  4 15:48 0083
>   -rw-------   1 postgres postgres 262144 Mar 26 18:25 0082
>   -rw-------   1 postgres postgres 262144 Mar 26 01:28 0081
>   -rw-------   1 postgres postgres 262144 Mar 25 23:05 0080
>   -rw-------   1 postgres postgres 262144 Mar 25 20:39 007F
>
> It seems that 0088 was generated at the time very close to
> my issuing dropdb.
>
> What are my options now?
>
> Thank you.
>
> Tena Sakai
> tsakai@gallo.ucsf.edu
>
>
>
> -----Original Message-----
> From: Jeff Frost [mailto:jeff@frostconsultingllc.com]
> Sent: Sat 8/23/2008 10:29 PM
> To: Tena Sakai
> Cc: pgsql-admin@postgresql.org
> Subject: RE: [ADMIN] restoring from dump
>
> On Sat, 23 Aug 2008, Tena Sakai wrote:
>
> > Hi,
> >
> > At psql prompt, I tried:
> >  drop database myDB;
> > and it told me:
> >  ERROR:  cannot drop the currently open database
> >
> > Does this mean I have to issue
> >  pg_ctl stop
> > before I issue
> >  "drop database myDB;"?
> > But if I do so, then how would I get to psql prompt
> > at all?
> >
> > How would I get around this catch-22 situation?
> > Any advice appreciated.
>
> Just connect to a different database to do your drop.  This is what 
> the
> 'postgres' database is often used for.  That's why it is sometimes 
> referred to
> as the maintenance DB.
>
> Or you can use the dropdb command.  Of course I'd still recommend 
> you rename
> the DB till you're sure the restore was successful.
>
>
> >
> > Tena Sakai
> > tsakai@gallo.ucsf.edu
> >
> > -----Original Message-----
> > From: Jeff Frost [mailto:jeff@frostconsultingllc.com]
> > Sent: Fri 8/22/2008 10:28 PM
> > To: Tena Sakai
> > Cc: pgsql-admin@postgresql.org
> > Subject: Re: [ADMIN] restoring from dump
> >
> >
> >
> > Jeff Frost wrote:
> >> Tena Sakai wrote:
> >>>
> >>>> If you want to restore all the databases that were in this
> >>>> postgresql installation at the time of the backup, then the
> >>>> best thing to do is drop all those DBs before attempting the
> >>>> restore.
> >>> Yes, the database in question is built and updated continuously
> >>> from several sources.  There was a massive power failure,
> >>> a series of them, and things got to be a very inconsistent
> >>> state and therefore we need to go back to a reliable, trustworthy
> >>> backup and then rebuild from there.
> >>>
> >>> What I gather from your comments is that all I have to do would be to issue
> >>> a psql command:
> >>>   drop database <dbname>
> >>> then repeat what I did from shell prompt, ie.,
> >>>   zcat <compressed_file> | psql postgres > restore.out 2> restore.err
> >>>
> >>> Would you mind confirming if I am understanding you correctly?
> >>>
> >> Yes, based on the information you've given us, you should be able 
> to
> >> restore the entire database (and any other databases that were in 
> the
> >> cluster) by first dropping those databases and then issuing the 
> above
> >> command.
> >>
> >> BTW, if you find yourself with an older version of postgresql, this
> >> could be a good opportunity to upgrade.  I'm not sure if you 
> mentioned
> >> what version you were using in your original post.
> > I should also note that you could rename the database instead of
> > dropping it outright, to make sure your restore is effective before
> > dropping it.
> >
> >
>
> --
> Jeff Frost, Owner       <jeff@frostconsultingllc.com>
> Frost Consulting, LLC   http://www.frostconsultingllc.com/
> Phone: 916-647-6411     FAX: 916-405-4032
>
>


Re: [GENERAL] psql \df+ [pattern] with overloaded functions

Richard Broersma wrote:
> Using psql, how can I specify a pattern for \df+ to show only a
> single overloaded function? The following is a list of my overloaded
> functions, and following it is an example of what happens when I
> try to limit the list by including the function signature.

There's no way to do what you want, because the pattern you give to \df
is only used to constrain the name of the function, not the arguments:

SELECT n.nspname as "Schema",
p.proname as "Name",
pg_catalog.pg_get_function_result(p.oid) as "Result data type",
pg_catalog.pg_get_function_arguments(p.oid) as "Argument data types"
FROM pg_catalog.pg_proc p
LEFT JOIN pg_catalog.pg_namespace n ON n.oid = p.pronamespace
WHERE p.prorettype <> 'pg_catalog.cstring'::pg_catalog.regtype
AND p.proargtypes[0] IS DISTINCT FROM 'pg_catalog.cstring'::pg_catalog.regtype
AND NOT p.proisagg
AND p.proname ~ '^(date_par.*)$'
and p.oid::regclass = 'date_part(text,abstime)' AND pg_catalog.pg_function_is_visible(p.oid)
ORDER BY 1, 2, 4;

You can alter it to use the "regprocedure" to only get the function that
matches a particular signature:

SELECT n.nspname as "Schema",
p.proname as "Name",
pg_catalog.pg_get_function_result(p.oid) as "Result data type",
pg_catalog.pg_get_function_arguments(p.oid) as "Argument data types"
FROM pg_catalog.pg_proc p
LEFT JOIN pg_catalog.pg_namespace n ON n.oid = p.pronamespace
WHERE p.prorettype <> 'pg_catalog.cstring'::pg_catalog.regtype
AND p.proargtypes[0] IS DISTINCT FROM 'pg_catalog.cstring'::pg_catalog.regtype
AND NOT p.proisagg
AND p.proname ~ '^(date_par.*)$'
and p.oid::regproc = 'date_part(text,abstime)'::regprocedure -- <-- here
AND pg_catalog.pg_function_is_visible(p.oid)
ORDER BY 1, 2, 4;

To do what you want you need something like this (note the cast to
regprocedure and from there to text):

alvherre=# SELECT n.nspname as "Schema",
p.proname as "Name",
pg_catalog.pg_get_function_result(p.oid) as "Result data type",
pg_catalog.pg_get_function_arguments(p.oid) as "Argument data types"
FROM pg_catalog.pg_proc p
LEFT JOIN pg_catalog.pg_namespace n ON n.oid = p.pronamespace
WHERE p.prorettype <> 'pg_catalog.cstring'::pg_catalog.regtype
AND p.proargtypes[0] IS DISTINCT FROM 'pg_catalog.cstring'::pg_catalog.regtype
AND NOT p.proisagg
AND p.proname ~ '^(date_par.*)$'
and p.oid::regprocedure::text like 'date_part(text,time%' AND pg_catalog.pg_function_is_visible(p.oid)
ORDER BY 1, 2, 4;
   Schema   |   Name    | Result data type |        Argument data types
------------+-----------+------------------+------------------------------------
 pg_catalog | date_part | double precision | text, timestamp without time zone
 pg_catalog | date_part | double precision | text, timestamp with time zone
 pg_catalog | date_part | double precision | text, time without time zone
 pg_catalog | date_part | double precision | text, time with time zone
(4 rows)


--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [PERFORM] NOW vs CURRENT_DATE

> I just discover a big not only big huge difference between NOW() and
> CURRENT_DATE.
>
> Did you already know about it and do you know why ?
>
> DELETE FROM blacklist where bl_date < (NOW() - interval '2 DAY');
> on 6 000 000 of records
> 699 ms
>
> DELETE FROM blacklist where bl_date < (CURRENT_DATE - interval '2 DAY');
> on 6 000 000 of records

Were these one-off runs, one after the other (e.g. with a ROLLBACK in
between)? If so, I suspect the difference is due to caching, and if you
re-ran the NOW() version it would also be fast.

Also, NOW() is equivalent to CURRENT_TIMESTAMP rather than
CURRENT_DATE. Perhaps the date vs. timestamp comparison has some
implication for how the query is planned.
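A quick way to check the caching theory (a sketch against the original table; run each variant twice and compare the second passes):

BEGIN;
EXPLAIN ANALYZE
  DELETE FROM blacklist WHERE bl_date < (NOW() - interval '2 DAY');
ROLLBACK;

BEGIN;
EXPLAIN ANALYZE
  DELETE FROM blacklist WHERE bl_date < (CURRENT_DATE - interval '2 DAY');
ROLLBACK;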

--
/ Peter Schuller

PGP userID: 0xE9758B7D or 'Peter Schuller <peter.schuller@infidyne.com>'
Key retrieval: Send an E-Mail to getpgpkey@scode.org
E-Mail: peter.schuller@infidyne.com Web: http://www.scode.org

Re: [PERFORM] Large number of tables slow insert

Loic Petit wrote:
> Hi,
>
> I use PostgreSQL 8.3.1-1 to store a lot of data coming from a large
> amount of sensors. In order to have good performance when querying by
> timestamp on each sensor, I partitioned my measures table per
> sensor. Thus I create a lot of tables.
> I simulated a large sensor network with 3000 nodes, so I have ~3000
> tables. And it appears that each insert (in separate transactions) into
> the database takes about 300 ms (3-4 inserts per second), in tables
> holding just a few tuples (< 10). I think you can understand that it's
> not efficient at all, because I need to handle a lot of inserts.
Can you tell us what kind of application this is? It sounds like a
control systems application where you will write the current values of
the sensors with each scan of a PLC. If so, is that a good idea? Also
is 3,000 sensors realistic? That would be a lot of sensors for one
control system.
>
> Do you have any idea why it is that slow? And how can I get good inserts?
How often do you write data for a sensor?
One write per sensor per second = 3,000 writes per second.
That would be an insert plus updates to each of your 6 indexes every
0.33 ms.

Is that a good idea? Is there a better strategy? What are you measuring
with the instruments e.g. is this a process plant or manufacturing
facility? What will people do with this data?
>
> My test machine: Intel P4 2.4 GHz, 512 MB RAM, Ubuntu Server (ext3)
> iostat tells me that I do 0.5 MB/s reading and ~6-7 MB/s writing during
> constant inserts
>
> Here is the DDL of the measures tables:
> -------------------------------------------------------
> CREATE TABLE measures_0
> (
> "timestamp" timestamp without time zone,
> storedtime timestamp with time zone,
> count smallint,
> "value" smallint[]
> )
> WITH (OIDS=FALSE);
> CREATE INDEX measures_0_1_idx
> ON measures_0
> USING btree
> ((value[1]));
>
> -- Index: measures_0_2_idx
> CREATE INDEX measures_0_2_idx
> ON measures_0
> USING btree
> ((value[2]));
>
> -- Index: measures_0_3_idx
> CREATE INDEX measures_0_3_idx
> ON measures_0
> USING btree
> ((value[3]));
>
> -- Index: measures_0_count_idx
> CREATE INDEX measures_0_count_idx
> ON measures_0
> USING btree
> (count);
>
> -- Index: measures_0_timestamp_idx
> CREATE INDEX measures_0_timestamp_idx
> ON measures_0
> USING btree
> ("timestamp");
>
> -- Index: measures_0_value_idx
> CREATE INDEX measures_0_value_idx
> ON measures_0
> USING btree
> (value);
> -------------------------------------------------------
>
> Regards
>
> Loïc Petit
>


--
H. Hall
ReedyRiver Group LLC
http://www.reedyriver.com


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [PERFORM] Large number of tables slow insert

Just a guess, but have you tried increasing max_fsm_relations?  It probably shouldn't matter, but you'll want it to be larger than the sum of all your tables and indexes, and it doesn't take much memory to increase it.

My next suggestion would be to log in as the superuser and VACUUM ANALYZE the system tables.  Perhaps it is simply the system-catalog access that has become inefficient with this many tables / indexes.
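Concretely, something like this as the superuser (a sketch; these are the catalogs that balloon when you have thousands of tables and indexes):

VACUUM ANALYZE pg_catalog.pg_class;
VACUUM ANALYZE pg_catalog.pg_attribute;
VACUUM ANALYZE pg_catalog.pg_index;
VACUUM ANALYZE pg_catalog.pg_depend;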


On Sat, Aug 23, 2008 at 6:48 PM, Loic Petit <tls.wydd@free.fr> wrote:
What I described in the last mail is what I am trying to do.
But as I said earlier, I only get about 3-4 inserts / second because of my problem.
So it's about one insert every 30 minutes for each table.

[PERFORM] NOW vs CURRENT_DATE

Hello everybody,

I just discovered a huge difference between NOW() and
CURRENT_DATE.

Did you already know about it, and do you know why?

DELETE FROM blacklist where bl_date < (NOW() - interval '2 DAY');
on 6 000 000 of records
699 ms

DELETE FROM blacklist where bl_date < (CURRENT_DATE - interval '2 DAY');
on 6 000 000 of records

0.065 ms

tx

david

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [HACKERS] xml plans for postgres?

On Sunday 24 August 2008 19:19:24 Greg Fausak wrote:
> Is there a document  that describes the direction Postgres
> will take relative to xml technology?

There is some information at http://wiki.postgresql.org/wiki/XML_Todo, but
these items are mainly aiming to complete the approach taken in 8.3. As for
major new directions, this is basically up to user demand and contributing
resources.

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] What in the world is happening on spoonbill?

Alvaro Herrera wrote:
> Stefan Kaltenbrunner wrote:
>> Tom Lane wrote:
>
>>> Can you modify the buildfarm's description of that machine to mention
>>> the special malloc debug flags? It'd probably stop me from asking
>>> you this question again ;-)
hmm - that would take somebody with SQL-level access to do - the script
to update OS/compiler-related data is only partially working (i.e. not
updating all information)...
>
> I've changed the compiler to read gcc-malloc-FGJPZ on spoonbill.
>
> BTW this animal has not updated in quite a few days ... is this
> expected?

FWIW: this should be fixed now ...


Stefan

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [pgsql-de-allgemein] Function: comma-separated result

2008/8/24 Thomas Markus <t.markus@proventis.net>:
> Hi,
>
> try
> select array_to_string(ARRAY(select email from turba_objects),',')
>
> Thomas
>
>
> Andreas Moroder wrote:
>>
>> Hello,
>>
>> I have a query,
>> select email from turba_objects.
>>
>> I would need a PL/pgSQL function which runs this query and returns a
>> string with all the mail addresses separated by commas, e.g. (
>> abc@test.com, xyz@probe.com etc. )
>> There must be no comma after the last entry.
>>
>> Can anyone help me with this?

You will probably have to write your own aggregate function for that,
as described in the first comment at
http://www.postgresql.org/docs/8.0/interactive/sql-createaggregate.html
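A minimal sketch of such an aggregate (8.2-style CREATE AGGREGATE syntax; names are illustrative and the emails are assumed to be non-NULL):

CREATE FUNCTION comma_cat(text, text) RETURNS text AS $$
    SELECT CASE WHEN $1 = '' THEN $2 ELSE $1 || ',' || $2 END
$$ LANGUAGE sql IMMUTABLE;

CREATE AGGREGATE comma_list(text) (
    SFUNC = comma_cat,
    STYPE = text,
    INITCOND = ''
);

SELECT comma_list(email) FROM turba_objects;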

>>
>> Many thanks
>> Andreas Moroder

HTH
Robert

--
Sent via pgsql-de-allgemein mailing list (pgsql-de-allgemein@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-de-allgemein

Re: [HACKERS] Extending error-location reports deeper into the system

Gregory Stark <stark@enterprisedb.com> writes:
> "Tom Lane" <tgl@sss.pgh.pa.us> writes:
>> If we didn't set the locations to unknown, then errors complaining about
>> problems arising within a rule would try to print pointers to locations in
>> the calling query's text having the same offsets as the problematic item had
>> in the original CREATE RULE or similar command. Not what we want.

> Just an idle thought... we could include the original source text with the
> rule as well. Though how easy it would be to use when we expand the rule is
> another question.

Then you'd need some way of keeping track of *which* query string
various nodes in the merged query tree were referencing. I'm not
willing to go there, at least not in this pass at the problem.

> The original objection included caveats that there may be other sites that
> have uncertainty about whether to include the line number. Are you sure there
> aren't any?

Well, if there are, we'll find out when we actually try to do the work.
Right now, however, I feel confident that we don't want equality to
depend on syntactic position because it never has in the past.

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] [PATCHES] VACUUM Improvements - WIP Patch

"Matthew T. O'Connor" <matthew@zeut.net> writes:
> I think everyone agrees that partial vacuums would be useful / *A Good
> Thing* but it's the implementation that is the issue.

I'm not sure how important it will really be once we have support for
dead-space-map-driven vacuum.

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Extending error-location reports deeper into the system

"Tom Lane" <tgl@sss.pgh.pa.us> writes:

"Tom Lane" <tgl@sss.pgh.pa.us> writes:

> If we didn't set the locations to unknown, then errors complaining about
> problems arising within a rule would try to print pointers to locations in
> the calling query's text having the same offsets as the problematic item had
> in the original CREATE RULE or similar command. Not what we want.

Just an idle thought... we could include the original source text with the
rule as well. Though how easy it would be to use when we expand the rule is
another question.

It would be useful for the "recompile view" type functionality that people
want for cases like adding a column to an underlying table that they're
reading in the view with "select *". Or people who swap columns around with
alter table and want the view to use new columns that now have the old names.

It would also be useful for debugging if you're looking at pg_dump output of
the reconstructed rule and could compare it with a commented-out original
definition to help understand what the original intent was and why it might
differ from the current definition.

The original objection included caveats that there may be other sites that
have uncertainty about whether to include the line number. Are you sure there
aren't any?

Are you sure there isn't anywhere else lurking that expects equal to really
mean equal? Perhaps someplace that builds a list and expects to eliminate
duplicates?

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's PostGIS support!

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[pgsql-es-ayuda] Database Tuning.

Good morning, list.

Dear Jaime Casanova,

I have been reading a lot about this subject and until now had tried several times to tune, without results.

I say "until now" because, after reading the .pdf presentation that bears the name of this mail's subject and touching 3 parameters, I managed to exactly double the performance of my DB.

Those parameters were shared_buffers, work_mem and effective_cache_size.

I set them following the advice given for the total amount of RAM I have available.

The environment: a Compaq Evo notebook, P. III, 1.2 GHz and 256 MB of RAM, with a 30 GB disk that has only 1 GB of free space.

The DB has 14 schemas; the largest one has 70 tables,
107 views and 77 functions, and contains two tables, one called cabezales with 116,820 rows and one called documentos with 270,000 rows, over which I ran a join with a select all.

Before the tuning it took 4 minutes cold and 2.5 warmed up; afterwards, 2.4 and 1.3 respectively.

shared_buffers = 64MB
work_mem = 11MB
effective_cache_size = 128MB

were the values I set. This week I am going to test on a test server I have at home, keeping the same proportion of values, and afterwards touch my clients' production servers.

If I get similar results, I will let you know.

Likewise, Jaime, even though I understood well what I had to do, because your material is very clear, I would like to be able to dig deeper into the concepts and the reasons behind these values. Where could I find material as didactic as your presentation that goes deeper into the subject?

I think the material is good for getting one out of a hurry, which I believe is its goal; it would be good if it mentioned further material for going deeper into the topics covered there.

I hope you don't take this as unfair criticism, since, as I said, in my case it helped me a lot and I consider it very good. I use the notebook for some demos, and the last time the performance was so low, given the notebook's condition, that I had two alternatives: either delete something or tune the DB. I chose the latter, backed by your material, and it worked.

Regards,
Gabriel Colina

--
TIP 10: don't use HTML in your question; whoever answers surely won't be able to read it

Re: [HACKERS] [PATCHES] VACUUM Improvements - WIP Patch

Joshua D. Drake wrote:
> Merlin Moncure wrote:
>> Well, there doesn't seem to be a TODO for partial/restartable vacuums,
>> which were mentioned upthread. This is a really desirable feature for
>> big databases and removes one of the reasons to partition large
>> tables.
> I would agree that partial vacuums would be very useful.


I think everyone agrees that partial vacuums would be useful / *A Good
Thing*, but the implementation is the issue. I was thinking
about Alvaro's recent work to make vacuum deal with TOAST tables
separately, which is almost like a partial vacuum since it effectively
splits the vacuum work up into multiple independent blocks of work, the
limitation obviously being that it can only split the work around
TOAST. Is there any way that vacuum could work per relfile, since we
already split tables into files that are never greater than 1G? I would
think that if vacuum never had more than 1G of work to do at any given
moment, it would be much more manageable.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Proposal: new border setting in psql

On Sun, 24 Aug 2008 13:22:38 -0400
Tom Lane <tgl@sss.pgh.pa.us> wrote:
> > I suppose it is my fault for mentioning ReST. That was the reason I
> > looked into this but that is not what the final proposal is.
>
> Well, if you can't just paste your output into ReST without having to
> hand-munge it afterwards, then ISTM the argument for having this
> additional bit of complexity in our printing routines really falls flat.

But Tom, you are still treating this as a ReST option. Please, pretend
that I never mentioned ReST. Consider this simply as a proposal to
make a logical extension to the "border [0|1|2]" setting. If you were
going to extend border to 3, what would you do? Adding extra row
dividers and turning dashes into equal signs for the existing row
divider seems pretty logical on its own without referencing any
external formats.

--
D'Arcy J.M. Cain <darcy@druid.net> | Democracy is three wolves
http://www.druid.net/darcy/ | and a sheep voting on
+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] proposal sql: labeled function params

On Sun, Aug 24, 2008 at 12:00:01PM -0400, Tom Lane wrote:
> So I feel that the proposal for labeled parameters as such is dead
> in the water, and that the only usefulness this thread has had is
> (re-) exploring the syntactic alternatives available for named params.

FWIW, I think the way that Python manages named and labelled params in
a single calling syntax is fairly straightforward.

http://docs.python.org/tut/node6.html#SECTION006720000000000000000

Have a nice day,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
> Please line up in a tree and maintain the heap invariant while
> boarding. Thank you for flying nlogn airlines.

Re: [GENERAL] Array, ANY and Regular Expressions

Tom Lane (tgl@sss.pgh.pa.us) wrote:
> "Stefan 'Kaishakunin' Schumacher" <stefan@net-tex.de> writes:
> > I have a table with an array column (tags text[]) which I want to
> > select from via ANY and RegExes.
>
> > SELECT tags from zettelkasten where 'Sozialpsychologie' ~* any(tags) order by
> > ident;
>
> Did you look at the matches? I suspect this isn't behaving even
> remotely the way you want, because ~* expects the pattern operand
> on the right.

Indeed, the results of ~* are not stable.

> Since there's no SQL syntax with ANY() on the left of the comparison
> operator, what you'd need to do to make this work is invent a "reverse
> regex" operator that switches its arguments. That'd be pretty trivial
> to do with a one-line plpgsql function under it. (I'm not sure what
> performance would be like though; as the table grows you might find
> yourself needing to change the reversing function to C.)

So there is no builtin way to do a regex search in Arrays?
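For reference, the workaround Tom describes would look roughly like this (the function and operator names are made up, and a plain SQL function serves as well as plpgsql):

CREATE FUNCTION rev_regex_imatch(text, text) RETURNS boolean AS $$
    SELECT $2 ~* $1
$$ LANGUAGE sql IMMUTABLE;

CREATE OPERATOR ~*< (LEFTARG = text, RIGHTARG = text,
                     PROCEDURE = rev_regex_imatch);

SELECT tags FROM zettelkasten
 WHERE 'Sozialpsychologie' ~*< ANY (tags)
 ORDER BY ident;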

Kind regards,
Stefan Schumacher
--
http://www.bildungswissenschaft.info

http://www.open-source-tag.de -- Magdeburger Open-Source-Tag
Entwicklung trifft Anwendung -- 11. Oktober 2008

[COMMITTERS] npgsql - Npgsql2: Added xml type support

Log Message:
-----------
Added xml type support

Modified Files:
--------------
Npgsql2/src/Npgsql:
NpgsqlProviderManifest.Manifest.xml (r1.5 -> r1.6)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/npgsql/Npgsql2/src/Npgsql/NpgsqlProviderManifest.Manifest.xml.diff?r1=1.5&r2=1.6)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

[COMMITTERS] npgsql - Npgsql2: Added xml type support and fixed uuid

Log Message:
-----------
Added xml type support and fixed uuid

Modified Files:
--------------
Npgsql2/src/Npgsql:
NpgsqlProviderManifest.cs (r1.6 -> r1.7)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/npgsql/Npgsql2/src/Npgsql/NpgsqlProviderManifest.cs.diff?r1=1.6&r2=1.7)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

[COMMITTERS] npgsql - Npgsql2: removed completed TODO comment.

Log Message:
-----------
removed completed TODO comment.

Modified Files:
--------------
Npgsql2/src/Npgsql/SqlGenerators:
SqlBaseGenerator.cs (r1.15 -> r1.16)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/npgsql/Npgsql2/src/Npgsql/SqlGenerators/SqlBaseGenerator.cs.diff?r1=1.15&r2=1.16)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [HACKERS] proposal sql: labeled function params

2008/8/24 Tom Lane <tgl@sss.pgh.pa.us>:
> "Pavel Stehule" <pavel.stehule@gmail.com> writes:
>> 2008/8/23 Hannu Krosing <hannu@2ndquadrant.com>:
>>> Why not just use some standard record syntax, like
>
>> do you think it is simpler?
>
> It's not about being "simpler", it's about pointing out that there are
> ways to do what you need without creating compatibility problems and
> without commandeering syntax that, if we were going to commandeer it,
> would be far better used for named params.
>
> IMHO, the use-case for labeled parameters is simply much too narrow
> to justify giving them special syntax if there is any possible way
> to avoid it. We have now seen a couple of ways to do it without
> new syntax, at the cost of a few more lines inside the called function
> to examine its arguments. But the use-cases you've suggested involve
> functions that are complicated enough that that's not going to be any
> big deal.
>
> So I feel that the proposal for labeled parameters as such is dead
> in the water, and that the only usefulness this thread has had is
> (re-) exploring the syntactic alternatives available for named params.

I feel it too.

Regards
Pavel Stehule

>
> regards, tom lane
>

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Proposal: new border setting in psql

"D'Arcy J.M. Cain" <darcy@druid.net> writes:
> On Sat, 23 Aug 2008 14:57:50 -0400
> Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> So, quite aside from the question of whether we care to support ReST,
>> my opinion is that this patch fails to do so, and a significantly more
>> invasive patch would be needed to do it.

> I suppose it is my fault for mentioning ReST. That was the reason I
> looked into this but that is not what the final proposal is.

Well, if you can't just paste your output into ReST without having to
hand-munge it afterwards, then ISTM the argument for having this
additional bit of complexity in our printing routines really falls flat.

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[HACKERS] Extending error-location reports deeper into the system

We currently have the ability to generate error location pointers, such
as

regression=# select nosuchcolumn from int8_tbl;
ERROR:  column "nosuchcolumn" does not exist
LINE 1: select nosuchcolumn from int8_tbl;
               ^

for grammar-detected syntax errors and for errors during first-level
semantic analysis of certain constructs (variables, operators, functions
are about all IIRC). Beyond that, it's not possible because we don't
include location fields in any node types used in post-parse-analysis
query trees.

There was some discussion about this back in March in connection with
a patch proposed by Peter, and in
http://archives.postgresql.org/pgsql-hackers/2008-03/msg00631.php
I wrote

> the current parser location mechanism stores locations only for
> nodes that appear in raw grammar trees (gram.y output), *not* in
> analyzed expressions (transformExpr output). This was an intentional
> choice based on a couple of factors:
>
> * Once we no longer have the parser input string available, the location
> information would be just so much wasted space.
>
> * It would add a weird special case to the equalfuncs.c routines:
> should location fields be compared? (Probably not, but it seems a bit
> unprincipled to ignore them.) And other places might have comparable
> uncertainties what to do with 'em.

It occurred to me today that one of the foundational assumptions of that
old decision has changed, namely that the original query text isn't
available after parsing. We have recently fixed things so that it
almost always *is* available. So I think a reasonable case can be
made for extending most or all querytree node types to include a
location field, and then using these fields in the way sketched in
the above-referenced message to include location pointers in many more
error messages than we do now.

The point about equalfuncs behavior isn't bothering me a lot at the
moment. It seems clear that we *do* want equal() to ignore location
fields, because one of the main purposes it's used for is to note
whether, eg, "ORDER BY x" and "GROUP BY x" are referring to the same
variable, and of course those two occurrences aren't going to have the
same syntactic position.

Another interesting point is outfuncs/readfuncs behavior. I think that
we'd want outfuncs to print the location fields, because they're
possibly useful for debugging; but readfuncs should ignore the data and
set the fields to -1 (unknown) when reading nodes back in. The reason
for this is that the only application for reading nodes in is sucking in
stored rules, default expressions, etc. And these are exactly the cases
where we indeed no longer have the original source text handy. If we
didn't set the locations to unknown, then errors complaining about
problems arising within a rule would try to print pointers to locations
in the calling query's text having the same offsets as the problematic
item had in the original CREATE RULE or similar command. Not what we
want.

There is going to be some small distributed overhead from adding an
additional integer field to so many common Node types, but I find it
hard to believe that it'd be measurable.

So this all seems doable and probably not a very large task to get the
infrastructure in place, though of course actually extending many error
messages to include location pointers will probably happen piecemeal
over time.

Thoughts, objections? If there are none I'm tempted to work on this
in the week remaining before September commitfest.

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [ADMIN] restoring from dump

Hi Jeff,

Quagmire deepens, it seems...

I tried:
  dropdb <myDB>
and it told me:
  dropdb: database removal failed: ERROR:  could not access status of transaction 139602298
  DETAIL:  Could not open file "pg_clog/0085": No such file or directory.

I went into pg_clog directory and issued:
  ls -lt | head
and it told me:
  -rw-------   1 postgres postgres 163840 Aug 24 09:57 0088
  drwx------  11 postgres postgres   4096 Aug 22 13:56 ..
  drwx------   2 postgres postgres   4096 Jun 30 16:03 .
  -rw-------   1 postgres postgres 262144 Jun  1 20:04 0084
  -rw-------   1 postgres postgres 262144 Apr  4 15:48 0083
  -rw-------   1 postgres postgres 262144 Mar 26 18:25 0082
  -rw-------   1 postgres postgres 262144 Mar 26 01:28 0081
  -rw-------   1 postgres postgres 262144 Mar 25 23:05 0080
  -rw-------   1 postgres postgres 262144 Mar 25 20:39 007F

It seems that 0088 was generated at the time very close to
my issuing dropdb.

What are my options now?

Thank you.

Tena Sakai
tsakai@gallo.ucsf.edu



-----Original Message-----
From: Jeff Frost [mailto:jeff@frostconsultingllc.com]
Sent: Sat 8/23/2008 10:29 PM
To: Tena Sakai
Cc: pgsql-admin@postgresql.org
Subject: RE: [ADMIN] restoring from dump

On Sat, 23 Aug 2008, Tena Sakai wrote:

> Hi,
>
> At psql prompt, I tried:
>  drop database myDB;
> and it told me:
>  ERROR:  cannot drop the currently open database
>
> Does this mean I have to issue
>  pg_ctl stop
> before I issue
>  "drop database myDB;"?
> But if I do so, then how would I get to psql prompt
> at all?
>
> How would I get around this catch-22 situation?
> Any advice appreciated.

Just connect to a different database to do your drop.  This is what the
'postgres' database is often used for.  That's why it is sometimes referred to
as the maintenance DB.

Or you can use the dropdb command.  Of course I'd still recommend you rename
the DB till you're sure the restore was successful.


>
> Tena Sakai
> tsakai@gallo.ucsf.edu
>
> -----Original Message-----
> From: Jeff Frost [mailto:jeff@frostconsultingllc.com]
> Sent: Fri 8/22/2008 10:28 PM
> To: Tena Sakai
> Cc: pgsql-admin@postgresql.org
> Subject: Re: [ADMIN] restoring from dump
>
>
>
> Jeff Frost wrote:
>> Tena Sakai wrote:
>>>
>>>> If you want to restore all the databases that were in this
>>>> postgresql installation at the time of the backup, then the
>>>> best thing to do is drop all those DBs before attempting the
>>>> restore.
>>> Yes, the database in question is built and updated continuously
>>> from several sources.  There was a massive power failure,
>>> a series of them, and things got to be a very inconsistent
>>> state and therefore we need to go back to a reliable, trustworthy
>>> backup and then rebuild from there.
>>>
>>> What I gather from your comments is that all I have to do would be to issue
>>> a psql command:
>>>   drop database <dbname>
>>> then repeat what I did from shell prompt, ie.,
>>>   zcat <compressed_file> | psql postgres > restore.out 2> restore.err
>>>
>>> Would you mind confirming if I am understanding you correctly?
>>>
>> Yes, based on the information you've given us, you should be able to
>> restore the entire database (and any other databases that were in the
>> cluster) by first dropping those databases and then issuing the above
>> command.
>>
>> BTW, if you find yourself with an older version of postgresql, this
>> could be a good opportunity to upgrade.  I'm not sure if you mentioned
>> what version you were using in your original post.
> I should also note that you could rename the database instead of
> dropping it outright, to make sure your restore is effective before
> dropping it.
>
>

--
Jeff Frost, Owner       <jeff@frostconsultingllc.com>
Frost Consulting, LLC   http://www.frostconsultingllc.com/
Phone: 916-647-6411     FAX: 916-405-4032

Re: [BUGS] Postgresql v8.3.3 + Perl v5.10

Eugen.Konkov@aldec.com wrote:
> Yes, I am using ActivePerl and the Win32 binary PG.
> Is it scheduled to be fixed?

Not yet. As stated, you need to use a supported version of Perl.

On the other hand, you can probably compile Postgres on Windows yourself
to get Perl 5.10 support.

--
Alvaro Herrera http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

[pgsql-es-ayuda] Error in 8.3 installation

Greetings to the list.

For some time I have had PostgreSQL 8.2.0 installed on a Windows Server 2003 R2 SP2 box, to run an ERP I am testing on it. To check whether this software would also run on 8.3, I uninstalled it in the test environment and installed again.

In short, the problem I ran into is the following: the installation is launched from a user session with full rights (Administrador). When the installation is completing and it tries to start the DB, the operation aborts ("Rolling back action"), the file copy and the rest of the configuration are cancelled, and finally this "FatalError" message appears: "Installation ended prematurely because of an error". Only the system user created for the installation remains.
I have tried all three versions of the series (8.3.0, 8.3.1 and 8.3.3) with the same result. To rule out any interference from previous installations I formatted and reinstalled the machine from scratch, but the error repeats.
In the end, I managed to install version 8.2.9 without problems.
The error also occurs when installing on another machine with Windows XP SP2.

Searching the list archives, I found that the problem was already mentioned in the following posts, which were left unresolved:

http://archives.postgresql.org/pgsql-es-ayuda/2008-02/msg00524.php
http://archives.postgresql.org/pgsql-es-ayuda/2008-03/msg00363.php
http://archives.postgresql.org/pgsql-es-ayuda/2008-03/msg00482.php
http://archives.postgresql.org/pgsql-es-ayuda/2008-06/msg00310.php

I will appreciate any information on the subject.
Regards.

José Antonio Pons



Text of the two error notices that appear in the Event Viewer:

Event type:     Information
Event source:   MsiInstaller
Event category: None
Event ID:       11708
Date:           21/08/2008
Time:           18:14:05
User:           NET001P01\Administrador
Computer:       NET001P01
Description:
Product: PostgreSQL 8.3 -- Installation failed.
Data:
0000: 7b 42 38 32 33 36 33 32   {B823632
0008: 46 2d 33 42 37 32 2d 34   F-3B72-4
0010: 35 31 34 2d 38 38 36 31   514-8861
0018: 2d 42 39 36 31 43 45 32   -B961CE2
0020: 36 33 32 32 34 7d         63224}

Event type:     Information
Event source:   MsiInstaller
Event category: None
Event ID:       11708
Date:           21/08/2008
Time:           18:11:42
User:           NET001P01\Administrador
Computer:       NET001P01
Description:
Product: PostgreSQL MUI Wrapper 8.3 -- Installation failed.
Data:
0000: 7b 34 41 46 46 42 39 46   {4AFFB9F
0008: 38 2d 41 30 37 31 2d 34   8-A071-4
0010: 35 44 45 2d 42 36 35 39   5DE-B659
0018: 2d 36 45 32 45 34 44 45   -6E2E4DE
0020: 32 42 41 42 44 7d         2BABD}

[HACKERS] xml plans for postgres?

Recently Postgres has added a data type 'xml' which allows a
well-formed document to be stored in a column of a Postgres database.
At the moment it does not have much utility; you can *almost* get the
same thing done with a text column. It's obviously just the first step.
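
For anyone who hasn't tried it, a minimal sketch of what the type
already supports (table and document contents invented for illustration):

CREATE TABLE docs (body xml);
-- XMLPARSE rejects input that is not a well-formed document
INSERT INTO docs VALUES (XMLPARSE(DOCUMENT '<note><to>world</to></note>'));
-- xpath() extracts fragments, returning an array of xml values
SELECT xpath('/note/to/text()', body) FROM docs;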

Is there a document that describes the direction Postgres
will take relative to XML technology? XQuery, XUpdate and XSLT
are very different approaches that could complement or severely disrupt
Postgres' evolution.

I personally am very excited about the potential but I'd like to get
the pulse of the community.

Thank you,

-g

--
Greg Fausak
greg@thursday.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] What in the world is happening on spoonbill?

Steven Lembark <lembark@wrkhors.com> writes:
>>> Are we in the business of excluding text-based browsers? Or obsolete
>>> ones, for that matter?

> I don't think we would want to be in the business of
> dealing successfully with every quirk of every browser
> ever released.

That's nothing but a straw-man. The point here was to avoid using
constructs that we know won't work on some set of browsers, not to
specifically code around any "quirks". I already suggested a workable
solution that involves no new assumptions at all, which was to put the
added info on the linked-to pages instead of directly on the dashboard.

Now we could do that *and* use tooltips, if we can be fairly sure that
the tooltips will be ignored by browsers that can't handle them as
popups.

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [BUGS] BUG #4274: uuid returns duplicate values

"Harald Armin Massa" <haraldarminmassa@gmail.com> writes:
>> I wouldn't trust "old fashioned uuids" anymore.

> BTW: Windows has an integrated GUID creator. Used for a lot of things.
> Used in all windows installations. THAT one should be fairly robust,
> shouldn't it?

[ you just made me spill coffee all over my keyboard... ] It's never
a good idea to use "Windows" and "robust" in the same sentence.

Try googling for "windows duplicate guids" and similar phrases.
I get 226000 hits on that one.

regards, tom lane

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

Re: [HACKERS] proposal sql: labeled function params

"Pavel Stehule" <pavel.stehule@gmail.com> writes:
> 2008/8/23 Hannu Krosing <hannu@2ndquadrant.com>:
>> Why not just use some standard record syntax, like

> do you think that is simpler?

It's not about being "simpler", it's about pointing out that there are
ways to do what you need without creating compatibility problems and
without commandeering syntax that, if we were going to commandeer it,
would be far better used for named params.

IMHO, the use-case for labeled parameters is simply much too narrow
to justify giving them special syntax if there is any possible way
to avoid it. We have now seen a couple of ways to do it without
new syntax, at the cost of a few more lines inside the called function
to examine its arguments. But the use-cases you've suggested involve
functions that are complicated enough that that's not going to be any
big deal.

So I feel that the proposal for labeled parameters as such is dead
in the water, and that the only usefulness this thread has had is
(re-) exploring the syntactic alternatives available for named params.

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] [PATCHES] VACUUM Improvements - WIP Patch

Merlin Moncure wrote:
> On Fri, Aug 22, 2008 at 11:36 PM, Bruce Momjian <bruce@momjian.us> wrote:
>> I assume there is no TODO here.
>
> Well, there doesn't seem to be a TODO for partial/restartable vacuums,
> which were mentioned upthread. This is a really desirable feature for
> big databases and removes one of the reasons to partition large
> tables.

I would agree that partial vacuums would be very useful.

Joshua D. Drake

>
> merlin
>


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] What in the world is happening on spoonbill?

Tom Lane wrote:
> "Joshua D. Drake" <jd@commandprompt.com> writes:
>> Tom Lane wrote:
>>> I'm not sure all browsing setups support tooltips nicely.
>
>> Any halfway modern browser that is not text-based should support tooltips.
>
> Are we in the business of excluding text-based browsers? Or obsolete
> ones, for that matter?

Shrug, I was just pointing out that most browsers should support it.


Joshua D. Drake


>
> regards, tom lane
>


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [pgsql-de-allgemein] Function: comma-separated result

Hi,

try
select array_to_string(ARRAY(select email from turba_objects),',')
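
If it has to be a PL/pgSQL function, a minimal sketch along those lines
(assuming the turba_objects table from the question) could be:

CREATE OR REPLACE FUNCTION email_list() RETURNS text AS $$
BEGIN
    -- ARRAY(...) collects the column; array_to_string joins the elements
    -- with commas and never adds one after the last entry
    RETURN array_to_string(ARRAY(SELECT email FROM turba_objects), ',');
END;
$$ LANGUAGE plpgsql;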

Thomas


Andreas Moroder wrote:
> Hello,
>
> I have a query:
>
> select email from turba_objects
>
> I need a PL/pgSQL function that runs this query and returns a string with all the mail addresses separated by commas, e.g. ( abc@test.com, xyz@probe.com and so on ).
> There must be no comma after the last entry.
>
> Can anyone help me with this?
>
> Many thanks
> Andreas Moroder
>
>


--
Sent via pgsql-de-allgemein mailing list (pgsql-de-allgemein@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-de-allgemein

[pgsql-de-allgemein] Function: comma-separated result

Hello,

I have a query:

select email from turba_objects

I need a PL/pgSQL function that runs this query and returns a string with all the mail addresses separated by commas, e.g. ( abc@test.com, xyz@probe.com and so on ).
There must be no comma after the last entry.

Can anyone help me with this?

Many thanks
Andreas Moroder


--
Sent via pgsql-de-allgemein mailing list (pgsql-de-allgemein@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-de-allgemein

Re: [BUGS] Installation Problems

Bernard Stewe wrote:
> Hi
>
> We are trying to install postgres 8.3.0 on solaris 9.
>
> The installation process comes up with the following error.

Hi,

Please, could you specify which binary you are using? It seems to me that you
downloaded a PG version compiled for Solaris 10 Update 4.

Zdenek

--
Zdenek Kotala Sun Microsystems
Prague, Czech Republic http://sun.com/postgresql


--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

Re: [HACKERS] What in the world is happening on spoonbill?

>> Are we in the business of excluding text-based browsers? Or obsolete
>> ones, for that matter?

I don't think we would want to be in the business of
dealing successfully with every quirk of every browser
ever released.

Another way to look at it is supporting standards:
if graphical browsers support at least HTML and CSS,
and perhaps ECMAScript, then they are supportable. If
text-based ones can handle the necessary alt tags then
we can also support them.

Beyond that, do you really want to document and code
around every quirk in MSIE 1.0, Netscape 0.50, or any
of the now-extinct text-based browsers for MSDOS?

--
Steven Lembark 85-09 90th St.
Workhorse Computing Woodhaven, NY, 11421
lembark@wrkhors.com +1 888 359 3508

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [PERFORM] Large number of tables slow insert

Are all the INSERTs made by one application? Maybe PREPARED STATEMENTS
would help you? What about optimization at the application level?
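
As a rough sketch (table and columns invented for illustration), a
prepared statement is parsed and planned once and then reused:

PREPARE ins_reading (int, timestamptz, real) AS
    INSERT INTO readings (sensor_id, ts, value) VALUES ($1, $2, $3);
-- each EXECUTE reuses the stored plan instead of re-parsing the INSERT
EXECUTE ins_reading(1, now(), 3.14);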

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [PERFORM] Large number of tables slow insert

hello to all,

I have a question regarding the following comments.

How does one estimate vacuum aggressiveness?

It is very difficult for me to set up the autovacuum settings correctly. It
seems to me that autovacuum is not aggressive enough, but when I change the
settings the autovacuum process is almost always running.

So how should it be set up for around 40,000 inserts, updates and deletes
per 5 minutes?
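
For reference, the postgresql.conf knobs I mean are roughly these
(the values here are only an illustration, not a recommendation):

autovacuum = on
autovacuum_naptime = 30s               # default is 1min
autovacuum_vacuum_threshold = 500      # dead rows before a table qualifies; default 50
autovacuum_vacuum_scale_factor = 0.05  # plus this fraction of the table; default 0.2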

regards

david
Scott Marlowe wrote:
> On Sat, Aug 23, 2008 at 6:59 PM, Loic Petit <tls.wydd@free.fr> wrote:
>
>> 1 table contains about 5 indexes: timestamp, one for each sensor type (3),
>> and one for packet counting (measures packet dropping)
>> (I reckon that this is quite heavy, but at least the timestamp and the values
>> are really useful)
>>
>
> But what's the update rate on these indexes and tables? I'm wondering
> if you're not vacuuming aggressively enough to keep up with bursty
> update patterns.
>
>


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [HACKERS] Proposal: new border setting in psql

On Sun, 24 Aug 2008 09:16:43 -0400
"Merlin Moncure" <mmoncure@gmail.com> wrote:
> Personally I think it's rather nice to be able to have some extra
> flexibility in how psql prints out data. Maybe, instead of the dry
> and uninformative 'border 2', there could be a set of output control
> options. Maybe I want the text aligned but with no border, for
> example.

You mean like "\pset border 0" does?
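
That is, something like this should already give aligned text with no
border at all:

\pset format aligned
\pset border 0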

Personally I would love to see user-defined display outputs for full
flexibility. Since we already have XML I would suggest using that as a
base and allowing filters to process it before output.

That's a much larger job though.

--
D'Arcy J.M. Cain <darcy@druid.net> | Democracy is three wolves
http://www.druid.net/darcy/ | and a sheep voting on
+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] [PATCHES] VACUUM Improvements - WIP Patch

On Fri, Aug 22, 2008 at 11:36 PM, Bruce Momjian <bruce@momjian.us> wrote:
>
> I assume there is no TODO here.

Well, there doesn't seem to be a TODO for partial/restartable vacuums,
which were mentioned upthread. This is a really desirable feature for
big databases and removes one of the reasons to partition large
tables.

merlin

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Proposal: new border setting in psql

On Sun, Aug 24, 2008 at 2:00 AM, D'Arcy J.M. Cain <darcy@druid.net> wrote:
> On Sat, 23 Aug 2008 14:57:50 -0400
> Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Also, having now looked at the proposed patch, it seems clear that it
>> isn't addressing the issue of quoting/escaping at all; so I wonder how
>> this can be considered to be a safely machine-readable format.
>
> It's not a machine readable format. It is a simple display with more
> border lines. Just like "border 2" is like "border 1" with more border
> lines. I'm just following the progression.


Personally I think it's rather nice to be able to have some extra
flexibility in how psql prints out data. Maybe, instead of the dry
and uninformative 'border 2', there could be a set of output control
options. Maybe I want the text aligned but with no border, for
example.

merlin

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [GENERAL] SERIAL datatype

On Aug 21, 2008, at 9:51 PM, Peter Billen wrote:

> My concern is not that the table will become full, but that the
> sequence will be exhausted. Doing INSERT, DELETE, INSERT,
> DELETE ... will exhaust the sequence. What will happen then? Do I
> have to manually re-order my serial values and reset the start
> sequence ID to MAX() + 1?

DELETEs don't use your sequence so will not exhaust it. In practice
only INSERTs do. I saw you mention sequences in combination with
DELETEs a few times, just making sure you're not confused ;)
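
If you ever did need to reset a sequence, a sketch like this (hypothetical
table t with serial column id) sets it just past the current maximum:

SELECT setval(pg_get_serial_sequence('t', 'id'),
              COALESCE((SELECT max(id) FROM t), 0) + 1,
              false);  -- false: the next nextval() returns exactly this value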

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] on delete cascade slowing down delete

On Aug 22, 2008, at 9:45 AM, Ivan Sergio Borgonovo wrote:

> On Fri, 22 Aug 2008 08:48:30 +0200
> Alban Hertroys <dalroi@solfertje.student.utwente.nl> wrote:
>
>>
>>> Is it going to make things faster if I:
>>>
>>> delete from s;
>>> reindex table s;
>
>> Why do you think this step would help you any? There's no index on
>> p to begin with. You'd just be reindexing the auto-generated
>> unique index on s (due to it being a PK).
>
> Sorry I forgot to add the index in the example.
> What if there was an index in s.pid too?
> But mostly... if I delete s will the deletion of p be faster?

Hard to tell without the results from explain analyse. It depends on
what the planner decides to do, but it's often faster than the things
we come up with to work around the planner. As a rule of thumb, if
you're trying to work around the planner it is likely your problem is
caused by something else.

Without an explain plan everything is just speculation, really; the
planner is quite smart and it knows your data. It tends to outsmart
the devs.

>>> delete from p;
>
>> And no, this would most likely be slower.
>
> Why?

Because of the extra reindex step. If you'd replace that with an
analyse of p, then it may be faster. Or it may not.

You seem to misinterpret the use case for REINDEX. Read here:
http://www.postgresql.org/docs/8.3/interactive/sql-reindex.html

Especially note the usage scenarios ;)

> Stopping a
> delete from p;
> I can see that actually postgresql is also executing a
> delete from s where pid=$1;
> if s is already empty, and there are no other cascading delete on s,
> the lookup should be faster. I was wondering if that doesn't make a
> difference in terms of performance if
> a) I've an index on pid on both tables
> or
> b) s is already empty
>
> and... should I reindex s if I "delete from s" first if I want some
> speed up on delete from p;
>
> Anyway this looks more and more a dead end once things get more and
> more complicated since it requires too much bookkeeping.

Maybe you shouldn't speculate on solutions before you have
ascertained what the problem is. People asked for an EXPLAIN ANALYSE;
we can't really help you without that.
With it we can probably tell you exactly what's causing your problem,
if we still need to once you see the results.
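
As a sketch of the sort of thing to try and to post (the index is only a
guess at the usual culprit, since FK columns are not indexed automatically):

CREATE INDEX s_pid_idx ON s (pid);
-- note: EXPLAIN ANALYZE really executes the DELETE;
-- wrap it in BEGIN; ... ROLLBACK; if you only want the timings
EXPLAIN ANALYZE DELETE FROM p;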

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general