Wednesday, September 10, 2008

Re: [SQL] FW: Help- post gress sql error

"Kota, Prasoona" <Prasoona.Kota@CompuCredit.com> writes:
> I am receiving the following error when trying to execute SQL
> Error: ERROR: ExecSubPlan: failed to find placeholder for subplan

What Postgres version is that? If it's not a current minor release,
please try updating, because I seem to remember having fixed some
bugs with symptoms like that.

If you can reproduce it on any current release version, please send
a complete test case (the query alone is pretty useless without
tables to try it on) to pgsql-bugs.
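(For readers of the archive: a complete test case here means a self-contained script that reproduces the error on an empty database. A minimal sketch of the shape, with hypothetical table and column names:)

CREATE TABLE a_tran (a_id int, t_code int, t_amt numeric, p_date date);
INSERT INTO a_tran VALUES (1, 10, 100.00, '2008-07-15');
-- ...plus any other tables and rows the query touches...
-- then the exact query that raises the ExecSubPlan error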

regards, tom lane

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

Re: [ADMIN] [GENERAL] FW: How to upload data to postgres

----------- Original message ----------------------
From: "Markova, Nina" <nmarkova@NRCan.gc.ca>
>
> I also plan to try to export data in XML format (from Ingres) and import
> it to Postgres.
>
> I didn't find any utility for importing XML data into Postgres. Or am I just
> looking at the wrong document?
> I run Postgres 8.2.4
>
> Thanks,
> Nina
>

I don't know how to do that. Back to your original problem, below is your copy statement from Ingres:

copy site(
sta= varchar(0)tab,
ondate= varchar(0)tab,
offdate= varchar(0)tab,
lat= c0tab,
lon= c0tab,
elev= c0tab,
regist_code= varchar(0)tab,
vault_cond= varchar(0)tab,
geology= varchar(0)tab,
comment= varchar(0)tab,
initials= varchar(0)tab,
lddate= c0nl,
nl= d0nl)
into '/tmp/site.dba'

According to the information at the link below, you should be able to change the varchar datatypes to char(0)tab and eliminate the length specifier. It seems this can be done for all data types and will produce a file with string representations of the data. The downside is that the strings are padded to the width of the column.

http://docs.ingres.com/sqlref/ColumnFormats#o1232

---
> --
> Adrian Klaver
> aklaver@comcast.net


--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: [pgsql-es-ayuda] Lectura del catalogo

Arturo Rossodivita wrote:

> for(i = 0; i < nnames; i++)
> {
> strcpy(cad,TextDatumGetCString(names[i]));
> }

DatumGetCString()

--
Alvaro Herrera http://www.amazon.com/gp/registry/3BP7BYG9PUGI8
"Changing the world ... one keyboard at a time!"
(www.DVzine.org)
--
TIP 3: If you found the answer to your problem, post it; others will thank you

Re: [GENERAL] "Stuck" query

On Sep 10, 2008, at 5:57 AM, Tommy Gildseth wrote:

> Tom Lane wrote:
>> Tommy Gildseth <tommy.gildseth@usit.uio.no> writes:
>>> Richard Huxton wrote:
>>>> Looks like part of your query results being sent. Is it hung in
>>>> that one
>>>> system-call?
>>> Yes, I left it there for about ~1 hour, and that was all that ever
>>> came.
>> Seems like you have got a network issue. What does netstat show
>> for the
>> status of that connection?
>> I don't think that a query cancel will blow PG off the send; you'd
>> probably have to resort to kill -9 on that process (with a consequent
>> restart of other sessions). It's odd that the kernel hasn't given up
>> on the connection yet ...
>
>
> Netstat showed:
>
> netstat -a --tcp -p | grep 49004
> tcp 0 44660 dbserver:postgres clientserver:49004 ESTABLISHED
> 17504/postgres: nav
>
> I went back to the server the client was running on to double check,
> and it seems the client process hadn't been killed off when the
> application was restarted.
>
> We've got some scheduled downtime tomorrow, so I think I'll just
> leave it till then, since it's not causing any problems as far as I
> can tell.

For what it's worth, I've run into a situation similar to this with a
client a couple of times in the last week or two (I can't say identical as
I don't know all of the details about the client end of your
connection). Using the client port # you can use lsof in addition to
netstat (lsof -i tcp:49004) to track down the client process. In our
case, the client process was a connection made via an ssh tunnel and
was sitting in FIN_WAIT2 status. Killing the client process
individually made everything go away nicely without any kind of extra
downtime necessary.

Erik Jones, Database Administrator
Engine Yard
Support, Scalability, Reliability
(415) 963-4410 x 260
Location: US/Pacific
IRC: mage2k


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[HACKERS] Interesting glitch in autovacuum

I observed a curious bug in autovac just now. Since plain vacuum avoids
calling GetTransactionSnapshot, an autovac worker that happens not to
analyze any tables will never call GetTransactionSnapshot at all.
This means it will arrive at vac_update_datfrozenxid with
RecentGlobalXmin never having been changed from its boot value of
FirstNormalTransactionId, which means that it will fail to update the
database's datfrozenxid ... or, if the current value of datfrozenxid
is past 2 billion, that it will improperly advance datfrozenxid to
sometime in the future.

Once you get into this state in a reasonably idle database such as
template1, autovac is completely dead in the water: if it thinks
template1 needs to be vacuumed for wraparound, then every subsequent
worker will be launched at template1, every one will fail to advance
its datfrozenxid, rinse and repeat. Even before that happens, the
DB's datfrozenxid will prevent clog truncation, which might explain
some of the recent complaints.

I've only directly tested this in HEAD, but I suspect the problem goes
back a ways.

On reflection I'm not even sure that this is strictly an autovacuum
bug. It can be cast more generically as "RecentGlobalXmin getting
used without ever having been set", and it sure looks to me like the
HOT patch may have introduced a few risks of that sort.

I'm thinking that maybe an appropriate fix is to insert a
GetTransactionSnapshot call at the beginning of InitPostgres'
transaction, thus ensuring that every backend has some vaguely sane
value for RecentGlobalXmin before it tries to do any database access.

Another thought is that even with that, an autovac worker is likely
to reach vac_update_datfrozenxid with a RecentGlobalXmin value that
was computed at the start of its run, and is thus rather old.
I wonder why vac_update_datfrozenxid is using the variable at all
rather than doing GetOldestXmin? It's not like that function is
so performance-critical that it needs to avoid calling GetOldestXmin.

Lastly, now that we have the PROC_IN_VACUUM test in GetSnapshotData,
is it actually necessary for lazy vacuum to avoid setting a snapshot?
It seems like it might be a good idea for it to do so in order to
keep its RecentGlobalXmin reasonably current.

I've only looked at this in HEAD, but I am thinking that we have
a real problem here in both HEAD and 8.3. I'm less sure how bad
things are in the older branches.

Comments?

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] [PATCH] Cleanup of GUC units code

> a) Verbosely spelling out the units in the default config file
> temp_buffers = 16 megabytes
> or
> temp_buffers = 16 milliblocks :-)
> Naive users who favor cut&paste will use the verbose words,
> which should leave little room for confusion. Power-users
> who know the short forms from the docs will presumably have
> read the descriptions.

I think it would make a lot of sense to encourage adding the word
"buffers" or "blocks" when that is the unit in question. This is all
religion at this point, but I find it difficult to believe that there
is any real need to spell out megabytes.

...Robert

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] New FSM patch

Zdenek Kotala wrote:
> Yesterday, I started reviewing your patch.

Thanks!

> 1) If I understand correctly, the main goal is to improve the FSM to cover
> all pages in the file, which is useful for huge databases.

That's not a goal per se, though it's true that the new FSM does cover
all pages. The goals are to:
- eliminate max_fsm_pages and max_fsm_relations GUC variables, so that
there's one thing less to configure
- make the FSM immediately available and useful after recovery (eg. warm
standby)
- make it possible to retail update the FSM, which will be needed for
partial vacuum

> 2) Did you perform any benchmark? Is there any performance improvement
> or penalty?

Working on it.. I've benchmarked some bulk-insertion scenarios, and the
new FSM is now comparable to the current implementation on those tests.

I've also been working on a low level benchmark using a C user-defined
function that exercises just the FSM, showing the very raw CPU
performance vs. current implementation. More on that later, but ATM it
looks like the new implementation can be faster or slower than the
current one, depending on the table size.

The biggest potential performance issue, however, is the fact that the
new FSM implementation is WAL-logged. That shows up dramatically in the
raw test where there's no other activity than FSM lookups and updates,
but will be much less interesting in real life where FSM lookups are
always related to some other updates which are WAL-logged anyway.

I also ran some DBT-2 tests without think times, with a small number of
warehouses. But the results of that had such a high variability from
test to test, that any difference in FSM speed would've been lost in the
noise.

Do you still have the iGen setup available? Want to give it a shot?

> 3) How does it work when the database has many active parallel connections?

The new FSM should in principle scale better than the old one. However,
Simon raised a worry about the WAL-logging: WALInsertLock can already
become the bottleneck in OLTP-scenarios with very high load and many
CPUs. The FSM isn't any worse than other actions that generate WAL, but
naturally if you're bottlenecked by the WAL lock or bandwidth, any
increase in WAL traffic will show up as an overall performance loss.

I'm not too worried about that, myself, because in typical scenarios the
extra WAL traffic generated by the FSM should be insignificant in volume
compared to all the other WAL traffic. But Simon will probably demand
some hard evidence of that ;-).

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[ADMIN] dynamic SQL

Hi list,
I need to create a SQL statement where the name of a table is in a variable..

vbuffer varchar;

SELECT buffer
INTO vbuffer
FROM rutas r, unidades u
WHERE r.codigo_ruta = u.codigo_ruta AND
      u.codigo_uni = 'B15_AFF666';

SELECT within(the_geom,(SELECT geomunion(the_geom) from var vbuffer)) as inside

Thanks..
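(A note for the archive: plain SQL cannot take a table name from a variable; in plpgsql this is normally done by building the statement as a string and running it with EXECUTE. A minimal sketch, assuming vbuffer holds the table name; the function name and some_table are placeholders:)

CREATE OR REPLACE FUNCTION inside_buffer(vbuffer varchar)
RETURNS SETOF boolean AS $$
DECLARE
    r RECORD;
BEGIN
    -- quote_ident() guards against malformed or malicious table names
    FOR r IN EXECUTE
        'SELECT within(the_geom, (SELECT geomunion(the_geom) FROM '
        || quote_ident(vbuffer) || ')) AS inside FROM some_table'
    LOOP
        RETURN NEXT r.inside;
    END LOOP;
END;
$$ LANGUAGE plpgsql;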

Re: [GENERAL] You need to rebuild PostgreSQL using --with-libxml.

On Wed, 2008-09-10 at 08:57 -0430, Ricardo Antonio Yepez Jimenez wrote:
> As I compile postgresql 8.3.2

If this is not a typo, please use 8.3.3.

> to support sql / xml, Red Hat 5.1
> enterprise edition, I need to know the steps to configure it

Why don't you use precompiled packages for RHEL + PostgreSQL 8.3.3,
which include xml support?

http://yum.pgsqlrpms.org

Regards
--
Devrim GÜNDÜZ, RHCE
devrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org

Re: [GENERAL] 64-bit Compile Failure on Solaris 10 with OpenSSL

On Wed, September 10, 2008 10:54 am, Zdenek Kotala wrote:
>>> Three questions (yeah, you forbade asking, but ...)
>>
>> grumble grumble grumble...
>>
>>> 1) Why 64
>>>
>>> 64bit code on SPARC is slower, because SPARC uses 4byte instructions
>>> and processing 64bit data needs more instructions. It is good only if
>>> you need more than 4GB of shared memory. When you use the sunstudio compiler
>>> with best optimization, a 64bit application has a 1%-5% performance
>>> degradation.
>>
>> A. Many databases use more than 4GB of shared memory.
>
> Of course, but you mentioned that you have only 4GB of RAM.

This is my test machine :-)

>> B. Re: SunStudio - that's why I'm using GCC.
>
> I don't understand you there. Sunstudio generates better code on SPARC
> and it is faster than code produced with GCC.

I read your statement too fast. Nevermind my response to that.

>>> 2) Why don't you use packages?
>>>
>>> You can use Solaris's packages, which are integrated and optimized
>>> for Solaris.
>>
>> Which are bloated with stuff I don't need and missing stuff I do. Not
>> to mention terribly outdated.
>
> Could you be more specific? If there is something you are missing or
> something wrong in Solaris's packages, let me know. Maybe I can improve it.

I just don't like the Solaris package system in general. It is, dare I
say, worse than RPM. But this is a PostgreSQL list, so I'll save the rant!

>>> 3) Why don't you use the built-in libssl?
>>>
>>> The integrated libssl is not only a copy of the original OpenSSL. It has
>>> a lot of improvements and it uses a crypto hardware accelerator if you
>>> have one (for example Niagara 2).
>>
>> But it is 32-bit.
>
> No, you have a 64bit version in /usr/sfw/lib/64.

I did not know that! I need to check it out later. Thanks for the tip.

Randy

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] What's size of your PostgreSQL Database?

Yes, we know both Greenplum and Netezza are PostgreSQL-based MPP solutions, but they are commercial packages.
I'd like to know whether there are open source ones, and I would suggest that the PostgreSQL team start an MPP version of PostgreSQL.

--------------------------------------------------
From: "Joshua Drake" <jd@commandprompt.com>
Sent: Wednesday, September 10, 2008 11:27 PM
To: "Amber" <guxiaobo1982@hotmail.com>
Cc: "Mark Roberts" <mailing_lists@pandapocket.com>; <pgsql-general@postgresql.org>
Subject: Re: [GENERAL] What's size of your PostgreSQL Database?

> On Wed, 10 Sep 2008 23:17:40 +0800
> "Amber" <guxiaobo1982@hotmail.com> wrote:
>>
>> 1. Some kind of MPP.
>> 2. No single point of failure.
>> 3. Convenient and multiple access interfaces.
>>
>> And the following are the solutions we have examined:
>
> http://www.greenplum.com/
>
> Joshua D. Drake
>
>
> --
> The PostgreSQL Company since 1997: http://www.commandprompt.com/
> PostgreSQL Community Conference: http://www.postgresqlconference.org/
> United States PostgreSQL Association: http://www.postgresql.us/
> Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
>
>
>
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] Base64 decode/encode performance

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On Wed, Sep 10, 2008 at 10:44:00AM -0400, Mark Mielke wrote:
> Marko Kreen wrote:
[...]
>> - decode does not seem to handle architectures that segfault
>> on unaligned int32 accesses.
>
> Out of curiosity - does this problem exist on any platform for which
> PostgreSQL is currently ported and supported?

HP PA is one of them. Besides, some others hide that behind a painful
(more than a factor of 1.3 -- we are talking software emulation here)
performance hit. More modern architectures (Alpha, Itanium, IA-64) take
this route.

Regards
- -- tomás
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFIx+ggBcgs9XrR2kYRAha9AJ0Xy6Zg/m76H2H4Uzta3pSXJh/D2gCfS8PF
vpDQMU8gg2BahURgSI97GSk=
=KnIm
-----END PGP SIGNATURE-----

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [ADMIN] [GENERAL] FW: How to upload data to postgres

I also plan to try to export data in XML format (from Ingres) and import
it to Postgres.

I didn't find any utility for importing XML data into Postgres. Or am I just
looking at the wrong document?
I run Postgres 8.2.4

Thanks,
Nina

-----Original Message-----
From: Adrian Klaver [mailto:aklaver@comcast.net]
Sent: September 10, 2008 10:39
To: pgsql-general@postgresql.org
Cc: Markova, Nina; pgsql-admin@postgresql.org
Subject: Re: [GENERAL] FW: How to upload data to postgres

On Wednesday 10 September 2008 7:14:50 am Markova, Nina wrote:
> Thanks Adrian.
>
> I have read the Postgres 'copy' - the problem is that Postgres doesn't
> understand the Ingres format. This is, I think, where the failure comes from.
> If I don't find a tool, I have to write scripts to convert the data to
> something postgres understands.
>
> In the Ingres data file, each varchar field is preceded by its real
> size:
>
> 48070 820010601 820030210 41.890
> -80.811 0.000 1U
> 3A16 819871030 0 47.471 -70.006
> 0.015 1R 0
>
> In the example above:
> 3A16 - means for varchar(5) field there are only 3 characters, i.e. A16
> 48070 - means for varchar(5) field there are only 4 characters, i.e.
> 8070
> 819871030 - 8 characters, i.e. 19871030

That would be the problem. The COPY from Postgres does not understand
the metadata associated with the field data and would try to insert the
complete string. I can see three options:
1) As has been suggested in another post, export the Ingres data
as data-only CSV, i.e. 'A16' not '3A16'
2) Your suggestion of cleaning up the data via a script.
3) Create a holding table in Postgres that has varchar fields (varchar
with no length specified), import into it, and then do your data cleanup
before moving over to the final table.
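(As a sketch of option 1: once Ingres emits a plain data-only file, the Postgres side of the load is a single COPY. Assuming CSV output; the path is hypothetical and must be readable by the server:)

COPY site FROM '/tmp/site.csv' WITH CSV;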

>
> When I created the same table in Postgres, inserted some test data
> and later copied it to a file, this is what it looks like:
>
> A16 19871030 47.471 -70.006 0.015 R
> KLNO 19801028 47.473 -70.006 0.016 R
> MLNO 19801028 19990101 47.413 -70.006 0.016 R
>
> Column | Type | Modifiers
> -------------+------------------------+-------------------------------
> sta | character varying(5) | not null
> ondate | character varying(8) | not null
> offdate | character varying(8) | not null
> lat | double precision | not null
> lon | double precision | not null
> elev | double precision | not null default 0
> regist_code | character(1) | not null default ' '::bpchar
>
>
> Nina
>

--
Adrian Klaver
aklaver@comcast.net

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: [GENERAL] What's size of your PostgreSQL Database?

On Wed, 10 Sep 2008 23:17:40 +0800
"Amber" <guxiaobo1982@hotmail.com> wrote:
>
> 1. Some kind of MPP.
> 2. No single point of failure.
> 3. Convenient and multiple access interfaces.
>
> And the following are the solutions we have examined:

http://www.greenplum.com/

Joshua D. Drake


--
The PostgreSQL Company since 1997: http://www.commandprompt.com/
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] What's size of your PostgreSQL Database?

> Yahoo has a 2PB single-instance Postgres database (modified
> engine), but the biggest pure Pg single instance I've heard of is 4TB.
> The 4TB database has the additional interesting property that they've
> done none of the standard "scalable" architecture changes (such as
> partitioning, etc). To me, this is really a shining example that even
> naive Postgres databases can scale to as much hardware as you're willing
> to throw at them. Of course, clever solutions will get you much more
> bang for your hardware buck.

Can you share some details of the particular design of the 4TB db? It sounds very interesting :)
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] Base64 decode/encode performance

On Wed, Sep 10, 2008 at 10:44:00AM -0400, Mark Mielke wrote:
> >There are 2 killer problems:
> >
> >- decode does not seem to handle architectures that segfault
> > on unaligned int32 accesses.
>
> Out of curiosity - does this problem exist on any platform for which
> PostgreSQL is currently ported and supported?

It actually exists on most CPUs, like Alpha/SPARC/MIPS, just not on
Intel chips, which is why you don't see them very often. Unaligned
accesses do take twice as long to execute though, even on Intel chips.
On some OSes the unaligned access is trapped and emulated by the OS,
which doesn't do much for performance.

Have a nice day,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
> Please line up in a tree and maintain the heap invariant while
> boarding. Thank you for flying nlogn airlines.

Re: [pgadmin-support] Feature Request: query history

Sounds good - I'd like such a feature as well. We'll have to limit it to a
maximum number of queries, though (50, 100, 200?) so the history doesn't
grow infinitely.

Andreas

> Yeah, that sounds good. It would be great if it stored the history across
> sessions too so if you close the app you can still get to the previous
> queries later.
>
> John
>
> My half-patch had a menu, kind of like the history menu in firefox (or
>> whatever browser - I'm sure they all have it). But the same principle.
>>
>


--
Andreas Neumann
Böschacherstrasse 6, CH-8624 Grüt/Gossau, Switzerland
Email: a.neumann@carto.net, Web:
* http://www.carto.net/ (Carto and SVG resources)
* http://www.carto.net/neumann/ (personal page)
* http://www.svgopen.org/ (SVG Open Conference)
* http://www.geofoto.ch/ (Georeferenced Photos of Switzerland)


--
Sent via pgadmin-support mailing list (pgadmin-support@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-support

Re: [GENERAL] What's size of your PostgreSQL Database?

> 8. We have a master and a replica. We have plans to move to a
> cluster/grid Soon(TM). It's not an emergency and Postgres can easily
> handle and scale to a 3TB database on reasonable hardware (<$30k).
>

I'd like to know your progress in choosing a cluster/grid solution; we are also looking for
an appropriate one. The following are our major factors for the ideal solution.

1. Some kind of MPP.
2. No single point of failure.
3. Convenient and multiple access interfaces.

And the following are the solutions we have examined:

1. Slony-I: Not an MPP solution, and it uses triggers to detect changes, which degrades performance.
2. pgpool-II: Some kind of MPP, but join operations can't be done in parallel on multiple machines; that is, it can't scale out well.
3. Sequoia: The same problem as pgpool-II, and the major access interface is JDBC.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] New FSM patch

Heikki Linnakangas napsal(a):
> Here's an updated FSM patch. Changes since last patch:
>

Yesterday, I started reviewing your patch. To begin, I have some
general questions:

1) If I understand correctly, the main goal is to improve the FSM to cover
all pages in the file, which is useful for huge databases.

2) Did you perform any benchmark? Is there any performance improvement
or penalty?

3) How does it work when the database has many active parallel connections?


Zdenek

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [PATCHES] [PgFoundry] Unsigned Data Types [1 of 2]

Hello Jaime,

It is taking longer than I expected to implement the scalarltsel and
scalargtsel functions for the unsigned integer data type.
I am still working on this solution and hope to have an updated patch
later this week (or over the weekend at the latest).

Just wanted to keep you updated on my status.

Thanks,

- Ryan

--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

Re: [PERFORM] too many clog files

> "Matt Smiley" <mss@rentrak.com> wrote:
> Alvaro Herrera wrote:
>> Move the old clog files back where they were, and run VACUUM FREEZE
in
>> all your databases. That should clean up all the old pg_clog files,
if
>> you're really that desperate.
>
> Has anyone actually seen a CLOG file get removed under 8.2 or 8.3?

Some of my high-volume databases don't quite go back to 0000, but this
does seem to be a problem. I have confirmed that VACUUM FREEZE on all
but template0 (which doesn't allow connections) does not clean them
up. No long running transactions are present.
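(For reference: clog can only be truncated up to the oldest datfrozenxid in the cluster, so a quick way to see which database is holding truncation back is:)

SELECT datname, age(datfrozenxid) FROM pg_database ORDER BY 2 DESC;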

-Kevin

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [GENERAL] plpgsql return select from multiple tables

Artis Caune escribió:

> Oops, I forgot to mention that this function is not so simple and uses
> some plpgsql features.

Ah, right, you only forgot to mention that other 99% of the
requirements.

What's wrong with your first example?

--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] 64-bit Compile Failure on Solaris 10 with OpenSSL

Randal T. Rioux napsal(a):
> On Tue, September 9, 2008 5:25 am, Zdenek Kotala wrote:
>> Randal T. Rioux napsal(a):
>>> I've battled this for a while. I'm finally breaking down and asking for
>>> help.
>>>
>>> If your answer to this is "why 64-bit" then don't answer. You wouldn't
>>> understand. Same if you say "why don't you use packages."
>>>
>>> Here is my scenario:
>>>
>>> - Sun 420R x450Mhz UltraSPARC-II / 4GB RAM
>>> - Solaris 10 05/08
>>> - OpenSSL 0.9.8h
>>> - PostgreSQL 8.3.3
>>> - GCC 3.4.6
>>> - GNU Make 3.81
>> Three questions (yeah, you forbade asking, but ...)
>
> grumble grumble grumble...
>
>> 1) Why 64
>>
>> 64bit code on SPARC is slower, because SPARC uses 4byte instructions and
>> processing 64bit data needs more instructions. It is good only if you
>> need more than 4GB of shared memory. When you use the sunstudio compiler with
>> best optimization, a 64bit application has a 1%-5% performance degradation.
>
> A. Many databases use more than 4GB of shared memory.

Of course, but you mentioned that you have only 4GB of RAM.

> B. Re: SunStudio - that's why I'm using GCC.

I don't understand you there. Sunstudio generates better code on SPARC
and it is faster than code produced with GCC.

>> 2) Why don't you use packages?
>>
>> You can use Solaris's packages, which are integrated and optimized for
>> Solaris.
>
> Which are bloated with stuff I don't need and missing stuff I do. Not to
> mention terribly outdated.

Could you be more specific? If there is something you are missing or
something wrong in Solaris's packages, let me know. Maybe I can improve it.

>> 3) Why don't you use the built-in libssl?
>>
>> The integrated libssl is not only a copy of the original OpenSSL. It has a lot of
>> improvements and it uses a crypto hardware accelerator if you have one (for
>> example Niagara 2).
>
> But it is 32-bit.

No, you have a 64bit version in /usr/sfw/lib/64.

Zdenek

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [PERFORM] Effects of setting linux block device readahead size

On Tue, 9 Sep 2008, Mark Wong wrote:

> I've started to display the effects of changing the Linux block device
> readahead buffer on the sequential read performance using fio.

Ah ha, told you that was your missing tunable. I'd really like to see the
whole table of one disk numbers re-run when you get a chance. The
reversed ratio there on ext2 (59MB read/92MB write) was what tipped me off
that something wasn't quite right initially, and until that's fixed it's
hard to analyze the rest.

Based on your initial data, I'd say that the two useful read-ahead
settings for this system are 1024KB (conservative but a big improvement)
and 8192KB (point of diminishing returns). The one-disk table you've got
(labeled with what the default read-ahead is) and new tables at those two
values would really flesh out what each disk is capable of.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [PATCHES] hash index improving v3

Alex Hunsaker napsal(a):

>
> wide:
> # NOTE not on the same machine as the "narrow" test was run!
>
> # spit out 2,000,000 random strings of length 100
> perl gen.pl > data.sql
> create table test_hash (wide text);
> copy test_hash from './data.sql';
> create index test_hash_num_idx on test_hash using hash (wide);
>
> bench.sql:
> select a.wide from test_hash as a inner join test_hash as b on b.wide
> = a.wide where a.wide =
> 'BJNORSLMITGKHJCWDBLKLYRSJTVPTYXZJPWNBKXGHYFNDHRAKNFMDHRMUXLDXNTRBJMTHPGPBFJZPAENZXDHAHCUSCJTUPUXWCXUH';
>
> # ^ that string is in data.sql
>
> # 3 runs each
> pgbench -c1 -n -t100000 -f bench.sql
> cvs head: tps = 5073.463498, 5110.620923, 4955.347610
> v5: tps = 5870.681336, 5740.007837, 5699.002942

What locale did you use? It would be nice to also have a comparison between
C and any UTF8 locale. I think we should see a big benefit when a non-C
locale is used.
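(For reference, the locale in effect can be checked from psql with:)

SHOW lc_collate;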

Thanks Zdenek

--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

Re: [GENERAL] Autocommit, isolation level, and vacuum behavior

Martijn van Oosterhout wrote:
> On Wed, Sep 10, 2008 at 09:45:04AM -0400, Jack Orenstein wrote:
>> Am I on the right track -- does autocommit = false for the BIG scan force
>> versions of TINY to accumulate? I played around with a JDBC test program,
>> and so far cannot see how the autocommit mode causes variations in what is
>> seen by the scan. The behavior I've observed is consistent with the
>> SERIALIZABLE isolation level, but 1) I thought the default was READ
>> COMMITTED, and 2) why does the accumulation of row versions have anything
>> to do with autocommit mode (as opposed to isolation level) on a connection
>> used for the scan?
>
> Vacuum can only clean up stuff older than the oldest open transaction.
> So if you have a transaction which is open for hours then stuff made
> since then can't be vacuumed. The solution is: don't do that.
>
> What I don't understand from your description is why your scan is slow

Application requirement. We need to do something for each row retrieved from BIG
and the something is expensive. We do the scan slowly (30 second sleep inside
the loop) to amortize the cost.

> and how the autocommit relates to this. Postgresql only cares about
> when you start and commit transactions, and I can't get from your
> description when exactly that happens.

If the slow scan is done with autocommit = true, then the transactions updating
BIG and TINY run with no degradation in performance (as long as TINY is vacuumed
frequently).

If the slow scan is done with autocommit = false, then the transactions updating
BIG and TINY get slower and slower and the TINY table's file bloats.

I guess the question is this: What are the transaction boundaries for a scan
done with autocommit = false? (The connection has autcommit false, and the
connection is used for nothing but the scan.)

Jack

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] Base64 decode/encode performance

Marko Kreen wrote:
> (Note: the b64encode there reads 3 chars at a time, b64decode int32
> at a time.)
>
> There are 2 killer problems:
>
> - decode does not seem to handle architectures that segfault
> on unaligned int32 accesses.

Out of curiosity - does this problem exist on any platform for which
PostgreSQL is currently ported and supported?

Cheers,
mark

--
Mark Mielke <mark@mielke.cc>


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [GENERAL] plpgsql return select from multiple tables

2008/9/10 Artis Caune <artis.caune@gmail.com>:
> Hi,
>
> What is the correct way of writing a plpgsql function which needs to return
> columns from multiple tables?
>
> e.x.:
> SELECT email FROM emails WHERE id = 1
> SELECT backend FROM backends WHERE id = 1
>
> I need the plpgsql function to return both email and backend in one row, like:
> SELECT email, backend FROM ...
>

in principle, you don't need a procedural language for this:

SELECT
(SELECT email FROM emails WHERE id = 1) as email,
(SELECT backend FROM backends WHERE id = 1) as backend;


>
> I do like this:
>
> CREATE OR REPLACE FUNCTION get_user_data( INT )
> RETURNS SETOF RECORD AS $$
> DECLARE
> v_email RECORD;
> v_backend RECORD;
> BEGIN
> SELECT email
> INTO v_email
> FROM emails
> WHERE id = $1;
>
> SELECT backend
> INTO v_backend
> FROM backends
> WHERE id = $1;
>
> RETURN QUERY SELECT v_email AS email,
> v_backend AS backend;
> END;
> $$ LANGUAGE 'plpgsql' SECURITY DEFINER;

nothing wrong here, but this can also be rewritten as a pure SQL function
(it can be a few percent faster and optimizable by the planner)

CREATE OR REPLACE FUNCTION get_user_data( INT )
RETURNS SETOF RECORD AS $$
SELECT
(SELECT email FROM emails WHERE id = $1) as email,
(SELECT backend FROM backends WHERE id = $1) as backend
$$ LANGUAGE 'sql' STABLE STRICT SECURITY DEFINER;


one question: why SETOF? This is supposed to always return one row,
right?
you could create a TYPE and return this. queries would be a bit simpler:

SELECT * FROM get_user_data('${id}');


finally, I am *almost* sure (maybe someone will correct me) that if
you encapsulate this in a function, you will always have some
performance penalty because
SELECT email FROM get_user_data('${id}');
will always scan the backends table, even if it's not needed.

for such usage, VIEWs are nicer.

create view user_data as
select u.id, e.email, b.backend
from users u [left?] join emails e on e.id=u.id [left?] join backends
b on b.id = u.id;

and

select * from user_data where id=1;


>
>
> and then doing selects:
> SELECT * FROM get_user_data('${id}') AS (email VARCHAR, backend VARCHAR)
>
>
> Is it okay? There will be a lot of those queries.
>
>
>
>
> --
> regards,
> Artis Caune
>
> <----. CCNA
> <----|====================
> <----' didii FreeBSD
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

--
Filip Rembiałkowski

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] [PATCH] Cleanup of GUC units code

Robert Haas wrote:
>bits...bytes...blocks...m...M
>
> I can't imagine that taking away the "B" is somehow going to
> be more clear.

If clarity is the goal, I'd want the following:

a) Verbosely spelling out the units in the default config file

temp_buffers = 16 megabytes
or
temp_buffers = 16 milliblocks :-)

Naive users who favor cut&paste will use the verbose words,
which should leave little room for confusion. Power-users
who know the short forms from the docs will presumably have
read the descriptions.

b) having "show" show verbosely spelled out units.
db=# show temp_buffers;
temp_buffers
--------------
16000000 bytes
(1 row)

c) having "set" show a NOTICE with the verbose word for the units
db=# set temp_buffers = '16mb';
NOTICE: setting temp_buffers to 16000000 bytes


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [ADMIN] [GENERAL] FW: How to upload data to postgres

On Wednesday 10 September 2008 7:14:50 am Markova, Nina wrote:
> Thanks Adrian.
>
> I have read the Postgres 'copy' - the problem is that Postgres doesn't
> understand the Ingres format. This is, I think, where the failure comes from.
> If I don't find a tool, I have to write scripts to convert the data to
> something postgres understands.
>
> In the Ingres data file, each varchar field is preceded by its real
> size:
>
> 48070 820010601 820030210 41.890
> -80.811 0.000 1U
> 3A16 819871030 0 47.471 -70.006
> 0.015 1R 0
>
> In the example above:
> 3A16 - means for varchar(5) field there are only 3 characters, i.e. A16
> 48070 - means for varchar(5) field there are only 4 characters, i.e.
> 8070
> 819871030 - 8 characters, i.e. 19871030

That would be the problem. The COPY from Postgres does not understand the
metadata associated with the field data and would try to insert the complete
string. I can see three options:
1) As has been suggested in another post, export the Ingres data as
data-only CSV, i.e. 'A16' not '3A16'
2) Your suggestion of cleaning up the data via a script.
3) Create a holding table in Postgres that has varchar fields (varchar with no
length specified), import into it, and then do your data cleanup before moving
over to the final table.

>
> When I created the same table in Postgres, inserted some test data and
> later copied it to a file, this is what it looks like:
>
> A16 19871030 47.471 -70.006 0.015 R
> KLNO 19801028 47.473 -70.006 0.016 R
> MLNO 19801028 19990101 47.413 -70.006 0.016 R
>
> Column | Type | Modifiers
> -------------+------------------------+---------------------------------
> sta | character varying(5) | not null
> ondate | character varying(8) | not null
> offdate | character varying(8) | not null
> lat | double precision | not null
> lon | double precision | not null
> elev | double precision | not null default 0
> regist_code | character(1) | not null default ' '::bpchar
>
>
> Nina
>

--
Adrian Klaver
aklaver@comcast.net

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: [NOVICE] Dump/Restore compatibility

--- On Wed, 9/10/08, Steve T <steve@retsol.co.uk> wrote:

> Does anyone here know about this or should I post this on
> another list?
>
>
> On Tue, 2008-08-26 at 08:57 +0100, Steve T wrote:
>
> > I am building a new server that is running PostgreSQL 8.1.11 under
> > RHEL5. My current 'live' server is PostgreSQL 8.0.3 under FC4 with an
> > FC4 8.0.3 backup server.
> > What I want to do is to swap the RHEL5 (ie 8.1.11) in as the live and
> > demote the live to backup and the backup to 'spare'. Currently, the
> > live databases are dumped and restored onto the backup box. Can I do
> > the same across the releases - ie dump the 8.1.11 and restore that
> > dump on 8.0.3?
> >

I can't say for sure but I also can't believe that you could restore an 8.1 dump on an 8.0 database.

Bruce


--
Sent via pgsql-novice mailing list (pgsql-novice@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-novice

Re: [GENERAL] plpgsql return select from multiple tables

On Wed, Sep 10, 2008 at 5:26 PM, Alvaro Herrera
<alvherre@commandprompt.com> wrote:
> Hmm, maybe
>
> select email, backend from emails, backends where emails.id = 1 and
> backends.id = 1;
> ?
>
> You don't need a plpgsql function for this ...

Oops, I forgot to mention that this function is not so simple and uses
some plpgsql features.
Here is one of them:
http://dpaste.com/hold/77192/


--
regards,
Artis Caune

<----. CCNA
<----|====================
<----' didii FreeBSD

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] Base64 decode/encode performance

On 9/10/08, Gaetano Mendola <mendola@gmail.com> wrote:
> I have been experimenting with some base64 encoding/decoding implementation.
>
> I found that the one at http://code.google.com/p/stringencoders is the best,
> obtaining a 1.3x speedup vs the postgres one.
>
> Do you think it is worth submitting a patch that replaces the postgres base64 implementation
> with this one?

(Note: the b64encode there reads 3 chars at a time, b64decode int32
at a time.)

There are 2 killer problems:

- decode does not seem to handle architectures that segfault
on unaligned int32 accesses.
- decode does not allow whitespace in input string.

If those are fixed, it's a question of whether the 1.3x speedup is worth more
complex code with big lookup tables.

If you want to optimize, it seems more worthwhile to add an additional
loop to the current code that reads 3 or 4 chars at a time, before the
current single-char loop. The decode loop may thus even optimize
to int32 fetching on x86/64 with a reasonable compiler. Handling
whitespace with such code is doable, but will the code be clear enough?

--
marko

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] using hash index when BETWEEN is specified

Hannu Krosing napsal(a):
> On Wed, 2008-09-10 at 07:13 -0400, Robert Haas wrote:
>>>> I'm no planner guru, but it seems to me that a BETWEEN clause could be
>>>> rewritten as an IN clause for integer data types and small intervals.
>>> Where should the line be drawn?
>>> Define small :)
>> When the estimated cost is lower?
>
> You still need to draw a line for when to even try estimating the cost.
>
> Will this be an interval of 10? Or 100? Or 10000?

I think it depends on the ratio between the number of unique integers in the
table and the size of the requested interval, on the value distribution, and
on the total number of rows.

For example, if you have 10 distinct numbers and each has 100 occurrences,
then a full scan is better (for between 1 and 5). But if each number
occurs 100000x, then using a hash index should be effective.
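(A minimal sketch of the rewrite being discussed; today it would have to be written by hand, since the planner does not do this transformation. Assuming a hash index on t.x:)

-- BETWEEN is a range predicate, which a hash index cannot serve:
SELECT * FROM t WHERE x BETWEEN 1 AND 5;
-- the equivalent IN form is a set of equality probes, which it can:
SELECT * FROM t WHERE x IN (1, 2, 3, 4, 5);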

Zdenek

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [GENERAL] plpgsql return select from multiple tables

Artis Caune escribió:
> Hi,
>
> What is the correct way of writing a plpgsql function which needs to return
> columns from multiple tables?
>
> e.x.:
> SELECT email FROM emails WHERE id = 1
> SELECT backend FROM backends WHERE id = 1
>
> I need the plpgsql function to return both email and backend in one row, like:
> SELECT email, backend FROM ...

Hmm, maybe

select email, backend from emails, backends where emails.id = 1 and
backends.id = 1;
?

You don't need a plpgsql function for this ...

--
Alvaro Herrera http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] You need to rebuild PostgreSQL using --with-libxml.

2008/9/10 Ricardo Antonio Yepez Jimenez <RAYEPEZJ@seniat.gov.ve>:
>
>
> Hi,
>
> As I compile postgresql 8.3.2 to support sql / xml, Red Hat 5.1 enterprise
> edition, I need to know the steps to configure it, if someone has a manual.

nothing fancy; just install libxml2 (on Debian I needed libxml2-dev,
adjust this to RH5.1),
and reconfigure the postgres sources --with-libxml


http://www.postgresql.org/docs/8.3/static/install-procedure.html

--
Filip Rembiałkowski

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] Autocommit, isolation level, and vacuum behavior

Martijn van Oosterhout wrote:

> Vacuum can only clean up stuff older than the oldest open transaction.
> So if you have a transaction which is open for hours then stuff made
> since then it can't be vacuumed. The solution is: don't do that.

Actually it's worse than that: older than the oldest transaction that
was active at the time when the current oldest transaction created its
snapshot.

As for autocommit, my guess is that the driver is doing "COMMIT; BEGIN".
This should not cause much of a problem in 8.3 compared to previous
releases, because the transaction gets its Xid at the time the first
write command is run (previously it was grabbed when the
transaction started). Also, I thought recent versions of the JDBC
driver did not issue the BEGIN right after COMMIT, so I'm surprised that
there's any visible difference at all.

--
Alvaro Herrera http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[GENERAL] plpgsql return select from multiple tables

Hi,

What is the correct way of writing a plpgsql function which needs to return
columns from multiple tables?

e.x.:
SELECT email FROM emails WHERE id = 1
SELECT backend FROM backends WHERE id = 1

I need the plpgsql function to return both email and backend in one row, like:
SELECT email, backend FROM ...


I do like this:

CREATE OR REPLACE FUNCTION get_user_data( INT )
RETURNS SETOF RECORD AS $$
DECLARE
v_email RECORD;
v_backend RECORD;
BEGIN
SELECT email
INTO v_email
FROM emails
WHERE id = $1;

SELECT backend
INTO v_backend
FROM backends
WHERE id = $1;

RETURN QUERY SELECT v_email AS email,
v_backend AS backend;
END;
$$ LANGUAGE 'plpgsql' SECURITY DEFINER;


and then doing selects:
SELECT * FROM get_user_data('${id}') AS (email VARCHAR, backend VARCHAR)


Is it okay? There will be a lot of those queries.


--
regards,
Artis Caune

<----. CCNA
<----|====================
<----' didii FreeBSD

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[pgadmin-hackers] pgscript merged to svn trunk

Guys (Mickael in particular),

I've merged the pgscript patch up with the current svn snapshot, and
also removed the support for --disable-pgscript. I've also cleaned up a
few other minor things.

I'm not done with my review, but here's the updated snapshot so others
can check it. In particular, check that I didn't break anything that
worked before :-) Thanks!

Oh - sidenote. I haven't merged up the VC++ project files yet, working
on that now, but I figured I should get the code out there already...

//Magnus

Re: [ADMIN] [GENERAL] FW: How to upload data to postgres

Thanks Adrian.

I have read the Postgres 'copy' - the problem is that Postgres doesn't
understand the Ingres format. This is, I think, where the failure comes from.
If I don't find a tool, I have to write scripts to convert the data to
something postgres understands.

In the Ingres data file, each varchar field is preceded by its real
size:

48070 820010601 820030210 41.890
-80.811 0.000 1U
3A16 819871030 0 47.471 -70.006
0.015 1R 0

In the example above:
3A16 - means for varchar(5) field there are only 3 characters, i.e. A16
48070 - means for varchar(5) field there are only 4 characters, i.e.
8070
819871030 - 8 characters, i.e. 19871030

When I created the same table in Postgres, inserted some test data and
later copied it to a file, this is what it looks like:

A16 19871030 47.471 -70.006 0.015 R
KLNO 19801028 47.473 -70.006 0.016 R
MLNO 19801028 19990101 47.413 -70.006 0.016 R

Column | Type | Modifiers
-------------+------------------------+---------------------------------
sta | character varying(5) | not null
ondate | character varying(8) | not null
offdate | character varying(8) | not null
lat | double precision | not null
lon | double precision | not null
elev | double precision | not null default 0
regist_code | character(1) | not null default ' '::bpchar


Nina

-----Original Message-----
From: Adrian Klaver [mailto:aklaver@comcast.net]
Sent: September 9, 2008 22:43
To: pgsql-general@postgresql.org
Cc: Markova, Nina
Subject: Re: [GENERAL] FW: How to upload data to postgres

On Tuesday 09 September 2008 1:54:12 pm Markova, Nina wrote:
> So far I tried:
>
> 1) I have copied data from Ingres in ASCII (using Ingres copydb
> command).
> 2) created a table in a Postgres database
> 3) tried loading data into Potgres table - encounter problems.
>
> For 1) (the Ingres part)
> =====================
> Ingres used the following copy commands:
>
> copy site(
> sta= varchar(0)tab,
> ondate= varchar(0)tab,
> offdate= varchar(0)tab,
> lat= c0tab,
> lon= c0tab,
> elev= c0tab,
> regist_code= varchar(0)tab,
> vault_cond= varchar(0)tab,
> geology= varchar(0)tab,
> comment= varchar(0)tab,
> initials= varchar(0)tab,
> lddate= c0nl,
> nl= d0nl)
> into '/tmp/site.dba'
>
> Normally Ingres will use this command to copy data from a file:
> copy site(
> sta= varchar(0)tab,
> ondate= varchar(0)tab,
> offdate= varchar(0)tab,
> lat= c0tab,
> lon= c0tab,
> elev= c0tab,
> regist_code= varchar(0)tab,
> vault_cond= varchar(0)tab,
> geology= varchar(0)tab,
> comment= varchar(0)tab,
> initials= varchar(0)tab,
> lddate= c0nl,
> nl= d0nl)
> from '/vm04-0/home/postgres/test/site.dba'
>
> For 3)
> =====
> - I got an error when I tried to copy with an Ingres-like copy command.
> - Then I tried to copy with simple 'copy site from
> '/vm04-0/home/postgres/test/site-c.dba' - ERROR: value too long for
> type character varying(5)

The ERROR explains it. The value you are bringing over from the Ingres
database is too long for a varchar(5) field.

Instead of rehashing the documentation I will point you to the relevant
section that pertains to Postgres COPY:
http://www.postgresql.org/docs/8.3/interactive/sql-copy.html

>
> - I had no luck either when I used binary copying - postgres complained
> about the signature:
> copy site from '/vm04-0/home/postgres/test/site.dba' with binary
>
> ERROR: COPY file signature not recognized
>
> ========================
> I have couple of questions as well.
> ========================
> Q1: is there an equivalent of copydb in postgres (in Ingres copydb
> creates copy statements for all database tables in a single file)

See pg_dump:
http://www.postgresql.org/docs/8.3/interactive/app-pgdump.html

> Q2: how to say in postgres that a field has no default values (in
> Ingres 'not default' is used - and this produced an error in postgres
> CREATE TABLE command)

The CREATE TABLE only takes a DEFAULT clause. If you want no default,
don't specify anything:

lat float not null,

Since you specified NOT NULL you will have to specify some value on
INSERT.

>
> Create table site (
> sta varchar(5) not null,
> ondate varchar(8) not null,
> offdate varchar(8) not null,
> lat float not null not default, ----->
> lon float not null not default
> )
>
> Q3: How to specify storage structure of a table (again in Ingres
> 'modify' statement is used to specify btree, isam or hash structure).
> In the Postgres documentation I only saw how to create an index with
> a specific structure.

As far as I know this cannot be done in Postgres. The only way you can
modify the storage parameters is:

"Storage Parameters

The WITH clause can specify storage parameters for tables, and for
indexes associated with a UNIQUE or PRIMARY KEY constraint. Storage
parameters for indexes are documented in CREATE INDEX. The only storage
parameter currently available for tables is:

FILLFACTOR

The fillfactor for a table is a percentage between 10 and 100. 100
(complete packing) is the default. When a smaller fillfactor is
specified, INSERT operations pack table pages only to the indicated
percentage; the remaining space on each page is reserved for updating
rows on that page. This gives UPDATE a chance to place the updated copy
of a row on the same page as the original, which is more efficient than
placing it on a different page.
For a table whose entries are never updated, complete packing is the
best choice, but in heavily updated tables smaller fillfactors are
appropriate. "

This only applies to later versions of Postgres.

>
> In Ingres: modify site to isam unique on sta, ondate (means structure
> isam, primary key is on 2 fields - sta and ondate)
>
> Thanks in advance,
> Nina
>
> > ______________________________________________
> > From: Markova, Nina
> > Sent: September 9, 2008 14:32
> > To: pgsql-general@postgresql.org
> > Subject: How to upload data to postgres
> >
> > Hi again,
> >
> > I need to load data from Ingres database to Postgres database.
> > What's the easiest way?
> >
> > Thanks,
> > Nina

--
Adrian Klaver
aklaver@comcast.net

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: [HACKERS] Synchronous Log Shipping Replication

Hi,

Fujii Masao wrote:
> On Tue, Sep 9, 2008 at 10:55 PM, Markus Wanner <markus@bluegap.ch> wrote:
>> Hi,
>>
>> ITAGAKI Takahiro wrote:
>>> Signals and locking, borrowed from Postgres-R, are now studied
>>> for the purpose in the log shipping,
>> Cool. Let me know if you have any questions WRT this imessages stuff.
>
> If you're sure it's all right, I have a trivial question.

Well, I know it works for me and I think it could work for you, too.
That's all I'm saying.

> Which signal should we use for the notification to the backend from
> WAL sender? The notable signals are already used.

I'm using SIGUSR1, see src/backend/storage/ipc/imsg.c from Postgres-R,
line 232. That isn't in use for backends or the postmaster, AFAIK.

> Or, since a backend doesn't need to wait on select() unlike the WAL sender,
> ISTM that it's not so inconvenient to use a semaphore for that notification.

They probably could, but not the WAL sender.

What's the benefit of semaphores? It seems pretty ugly to set up a
semaphore, lock that on the WAL sender, then claim it on the backend to
wait for it, and then release it on the WAL sender to notify the backend.

If all you want to do is to signal the backend, why not use signals ;-)
But maybe I'm missing something?

Regards

Markus Wanner


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[pgadmin-hackers] SVN Commit by mha: r7463 - in trunk/pgadmin3: . i18n

Author: mha

Date: 2008-09-10 15:13:34 +0100 (Wed, 10 Sep 2008)

New Revision: 7463

Revision summary: http://svn.pgadmin.org/cgi-bin/viewcvs.cgi/?rev=7463&view=rev

Log:
Some more setting of svn:ignore

Modified:
trunk/pgadmin3/
trunk/pgadmin3/i18n/

--
Sent via pgadmin-hackers mailing list (pgadmin-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-hackers

Re: [GENERAL] PostgreSQL TPC-H test result?

On Tue, Sep 09, 2008 at 05:42:50PM -0400, Greg Smith wrote:
>
> While some of the MonetDB bashing in this thread was unwarranted,

What bashing? I didn't see any bashing of them.

A
--
Andrew Sullivan
ajs@commandprompt.com
+1 503 667 4564 x104
http://www.commandprompt.com/

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [pgsql-advocacy] Binaries vs Source

On Wed, 10 Sep 2008 12:48:29 +0300
Peter Eisentraut <peter_e@gmx.net> wrote:

> Naz wrote:
> > Joshua Drake wrote:
> > > Oh.. actually I would find it very surprising if compile from
> > > source

>

And you break dependencies (from source).

Joshua D. Drake


--
The PostgreSQL Company since 1997: http://www.commandprompt.com/
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate

--
Sent via pgsql-advocacy mailing list (pgsql-advocacy@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-advocacy

Re: [pgadmin-support] Feature Request: query history

Yeah, that sounds good.  It would be great if it stored the history across sessions too so if you close the app you can still get to the previous queries later.

John

My half-patch had a menu, kind of like the history menu in firefox (or
whatever browser - I'm sure they all have it). But the same principle.

Re: [GENERAL] Autocommit, isolation level, and vacuum behavior

On Wed, Sep 10, 2008 at 09:45:04AM -0400, Jack Orenstein wrote:
> Am I on the right track -- does autocommit = false for the BIG scan force
> versions of TINY to accumulate? I played around with a JDBC test program,
> and so far cannot see how the autocommit mode causes variations in what is
> seen by the scan. The behavior I've observed is consistent with the
> SERIALIZABLE isolation level, but 1) I thought the default was READ
> COMMITTED, and 2) why does the accumulation of row versions have anything
> to do with autocommit mode (as opposed to isolation level) on a connection
> used for the scan?

Vacuum can only clean up stuff older than the oldest open transaction.
So if you have a transaction which is open for hours then stuff made
since then can't be vacuumed. The solution is: don't do that.
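(A minimal two-session sketch of that effect; the table name is hypothetical:)

-- session 1: open a transaction, take a snapshot, and leave it idle
BEGIN;
SELECT count(*) FROM tiny;
-- ...no COMMIT or ROLLBACK for hours...

-- session 2: row versions dead since session 1's snapshot can't be reclaimed
DELETE FROM tiny WHERE id < 1000;
VACUUM VERBOSE tiny; -- reports N dead row versions "cannot be removed yet"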

What I don't understand from your description is why your scan is slow
and how the autocommit relates to this. Postgresql only cares about
when you start and commit transactions, and I can't get from your
description when exactly that happens.

Rule of thumb: don't hold transactions open unnecessarily long.

Have a nice day,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
> Please line up in a tree and maintain the heap invariant while
> boarding. Thank you for flying nlogn airlines.

Re: [HACKERS] Synchronous Log Shipping Replication

On Tue, Sep 9, 2008 at 10:55 PM, Markus Wanner <markus@bluegap.ch> wrote:
> Hi,
>
> ITAGAKI Takahiro wrote:
>>
>> Signals and locking, borrowed from Postgres-R, are now studied
>> for the purpose in the log shipping,
>
> Cool. Let me know if you have any questions WRT this imessages stuff.

If you're sure it's all right, I have a trivial question.

Which signal should we use for the notification to the backend from
WAL sender? The notable signals are already used.

Or, since a backend doesn't need to wait on select() unlike the WAL sender,
ISTM that it's not so inconvenient to use a semaphore for that notification.

Your thought?

regards

--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[pgadmin-hackers] SVN Commit by mha: r7462 - in trunk/pgadmin3/xtra: . wx-build

Author: mha

Date: 2008-09-10 15:01:11 +0100 (Wed, 10 Sep 2008)

New Revision: 7462

Revision summary: http://svn.pgadmin.org/cgi-bin/viewcvs.cgi/?rev=7462&view=rev

Log:
Set svn:ignore properties

Modified:
trunk/pgadmin3/xtra/
trunk/pgadmin3/xtra/wx-build/

--
Sent via pgadmin-hackers mailing list (pgadmin-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-hackers

[SQL] FW: Help- post gress sql error

Hi!

I am receiving the following error when trying to execute SQL

--------------------
Error: ERROR: ExecSubPlan: failed to find placeholder for subplan
result (State:HY000, Native Code: 2C)
---------------------

My query works fine in Oracle database without any issues.

I have subqueries within the SQL. If I replace the subqueries with
hard-coded values, the query works fine.
---------------------
select ft.a_id
,sum(nvl(ft.t_amt,0)) fee
from a_tran ft
,c_sum c1
,( select c.a_id,max(c.c_end_date)as end_date
from c_sum c
where c.d_status_id not in (7,10)
and c.c_end_date between '01-jun-2008' and '31-jul-2008'
group by c.a_id) prev
where ft.a_id=prev.a_id
and c1.a_id=prev.a_id
and c1.c_end_date=prev.end_date
and ft.p_date between '01-jul-2008' and '31-jul-2008'
and (( ft.t_code in ( select fld1_source_value from tran_code)
)
OR ( ft.t_code in ( select fld1_source_value from tran_code_2)
and ft.t_description not in (
select tran_dscr from tran_code_dscr)
)
)
and ft.p_date between c1.c_start_date and c1.c_end_date group by
ft.a_id;
----------

Please advise if there is any setting that can be made to avoid this
problem.

Thanks,
Prasoona

-----------------------------------------
====================================================
This message contains PRIVILEGED and CONFIDENTIAL
information that is intended only for use by the
named recipient. If you are not the named recipient,
any disclosure, dissemination, or action based on
the contents of this message is prohibited. In such
case please notify us and destroy and delete all
copies of this transmission. Thank you.
====================================================

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql