Sunday, July 20, 2008

[NOVICE] Character set conversion

Hi All,

I have a client application that uses an 8-bit character set that is not
supported by PostgreSQL. I'm using UTF-8 to store data within my
database and would like to create a character set conversion between my
native set and PostgreSQL's. I have all the information I need as far as
which 8-bit value should be mapped to which UTF-8 character.

I read in the documentation about the CREATE CONVERSION command and
about writing a function to do the conversion job. Is this the best way
forward, or are there better ways to attempt this? Is there any sample
code available for implementing such a conversion? I don't want to
reinvent the wheel here...
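(For reference, the shape of what the documentation describes, as far as I can tell: the conversion function has to be written in C with this exact signature, and both encodings named in CREATE CONVERSION must already be encodings PostgreSQL knows about. All names below are placeholders.)

CREATE FUNCTION myconv_to_utf8(integer, integer, cstring, internal, integer)
    RETURNS void
    AS '$libdir/myconv', 'myconv_to_utf8'
    LANGUAGE C STRICT;

CREATE CONVERSION myconv FOR 'LATIN1' TO 'UTF8' FROM myconv_to_utf8;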

--
Kindest Regards,

Bastiaan Olij
e-mail/MSN: bastiaan@basenlily.nl
web: http://www.basenlily.nl
Skype: Mux213
http://www.linkedin.com/in/bastiaanolij


--
Sent via pgsql-novice mailing list (pgsql-novice@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-novice

Re: [PATCHES] pg_dump additional options for performance

Simon Riggs <simon@2ndquadrant.com> writes:
> I also suggested having three options
> --want-pre-schema
> --want-data
> --want-post-schema
> so we could ask for any or all parts in the one dump. --data-only and
> --schema-only are negative options, so they don't allow this.
> (I don't like those names either, just thinking about capabilities)

Maybe invert the logic?

--omit-pre-data
--omit-data
--omit-post-data

Not wedded to these either, just tossing out an idea...

regards, tom lane

--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

Re: [PERFORM] log_statement at postgres.conf

Thanks a lot, Nicolas,

I finally managed to log query statements thanks to your simple explanation.
I have a few other questions:
1. Is there a possibility of automatically logging those statements to a table?
2. The logged statements come from every database on my server;
can I tell which database a statement came from,
or at least filter the log to only database X?
3. If I only need to log changes made to my database, is the right value of
'log_statement' 'mod'?
CMIIW

Regards,
Joko [SYSTEM]
PT. Indra Jaya Swastika
Phone: +62 31 7481388 Ext 201
http://www.ijs.co.id

--sorry for my bad english

----- Original Message -----
From: "Pomarede Nicolas" <npomarede@corp.free.fr>
To: "System/IJS - Joko" <system@ijs.co.id>
Cc: <pgsql-performance@postgresql.org>
Sent: Friday, July 18, 2008 3:16 PM
Subject: Re: [PERFORM] log_statement at postgres.conf

> There are two points in your question:
>
> - what to log
> - where to log
>
> To choose 'what' to log in your case, you can change 'log_statement' to
> 'all'.
>
> Then, to choose 'where' to log, you can either use the proposal in the
> first answer, or change 'log_destination' to 'stderr' and
> 'redirect_stderr' to 'on'.
>
> Nicolas
>
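(For reference, a minimal postgresql.conf sketch along those lines, which also touches questions 2 and 3: 'mod' logs only DDL and data-changing statements, and %d in log_line_prefix tags each log line with the database name. There is no built-in way to log statements directly into a table; the usual workaround is to load the log files into one afterwards, e.g. via the csvlog format added in 8.3. The values are illustrative, and redirect_stderr is called logging_collector from 8.3 on.)

log_statement = 'mod'          # DDL plus INSERT/UPDATE/DELETE/TRUNCATE/COPY FROM
log_destination = 'stderr'
redirect_stderr = on           # named logging_collector in 8.3 and later
log_line_prefix = '%d %u '     # prefix each log line with database and user name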

--
If you have any problem with our services,
please contact us at 70468146 or e-mail: helpdesk@ijs.co.id
PT Indra Jaya Swastika | Jl. Kalianak Barat 57A | +62-31-7481388

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [PATCHES] pg_dump additional options for performance

On Sun, 2008-07-20 at 23:34 -0400, Tom Lane wrote:
> Stephen Frost <sfrost@snowman.net> writes:
> > * daveg (daveg@sonic.net) wrote:
> >> One observation, indexes should be built right after the table data
> >> is loaded for each table, this way, the index build gets a hot cache
> >> for the table data instead of having to re-read it later as we do now.
>
> > That's not how pg_dump has traditionally worked, and the point of this
> > patch is to add options to easily segregate the main pieces of the
> > existing pg_dump output (main schema definition, data dump, key/index
> > building). Your suggestion brings up an interesting point: should
> > pg_dump's traditional output structure change, the "--schema-post-load"
> > set of objects wouldn't be as clear to newcomers, since the load and the
> > indexes would be interleaved in the regular output.

Stephen: Agreed.

> Yeah. Also, that is pushing into an entirely different line of
> development, which is to enable multithreaded pg_restore. The patch
> at hand is necessarily incompatible with that type of operation, and
> wouldn't be used together with it.
>
> As far as the documentation/definition aspect goes, I think it should
> just say the parts are
> * stuff needed before you can load the data
> * the data
> * stuff needed after loading the data
> and not try to be any more specific than that. There are corner cases
> that will turn any simple breakdown into a lie, and I doubt that it's
> worth trying to explain them all. (Take a close look at the dependency
> loop breaking logic in pg_dump if you doubt this.)

Tom: Agreed.

> I hadn't realized that Simon was using "pre-schema" and "post-schema"
> to name the first and third parts. I'd agree that this is confusing
> nomenclature: it looks like it's trying to say that the data is the
> schema, and the schema is not! How about "pre-data and "post-data"?

OK by me. Any other takers?

I also suggested having three options
--want-pre-schema
--want-data
--want-post-schema
so we could ask for any or all parts in the one dump. --data-only and
--schema-only are negative options, so they don't allow this.
(I don't like those names either, just thinking about capabilities)

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support


--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

Re: [PATCHES] pg_dump additional options for performance

On Sun, 2008-07-20 at 21:18 -0400, Stephen Frost wrote:
> * Simon Riggs (simon@2ndquadrant.com) wrote:
> > On Sun, 2008-07-20 at 17:43 -0400, Stephen Frost wrote:
> > > Even this doesn't cover everything though- it's too focused on tables
> > > and data loading. Where do functions go? What about types?
> >
> > Yes, it is focused on tables and data loading. What about
> > functions/types? No relevance here.
>
> I don't see how they're not relevant, it's not like they're being
> excluded and in fact they show up in the pre-load output. Heck, even if
> they *were* excluded, that should be made clear in the documentation
> (either be an explicit include list, or saying they're excluded).
>
> Part of what's driving this is making sure we have a plan for future
> objects and where they'll go. Perhaps it would be enough to just say
> "pre-load is everything in the schema, except things which are faster
> done in bulk (eg: indexes, keys)". I don't think it's right to say
> pre-load is "only object definitions required to load data" when it
> includes functions and ACLs though.
>
> Hopefully my suggestion and these comments will get us to a happy
> middle-ground.

I don't really understand what you're saying.

The options split the dump into three parts, that's all: before the load,
the load, and after the load.

--schema-pre-load says
"Dumps exactly what <option>--schema-only</> would dump, but only those
statements before the data load."
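(In other words, the intended usage is simply along these lines, with the
two proposed switches plus the existing --data-only; "mydb" is just a
placeholder:)

pg_dump --schema-pre-load mydb > pre-load.sql
pg_dump --data-only mydb > data.sql
pg_dump --schema-post-load mydb > post-load.sql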

What is it you are suggesting? I'm unclear.

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support


--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

Re: [PATCHES] [HACKERS] Hint Bits and Write I/O

On Tue, Jul 1, 2008 at 4:13 AM, Simon Riggs <simon@2ndquadrant.com> wrote:
>
>
> The first "half" is actually quite large, but that makes it even more
> sensible to commit this part now.
>
> The enclosed patch introduces the machinery by which we might later
> optimise hint bit setting. It differentiates between hint bit setting
> and block dirtying, when the distinction can safely be made. It acts
> safely during VACUUM and correctly during checkpoint. In all other
> respects it emulates current behaviour.
>

As you yourself said, this patch mostly gets the machinery to count
hint bits in place and leaves the actual optimization for the future.
But I think we should try at least one or two possible optimizations and
get some numbers before we jump in and make substantial changes to the
code. That would also help us test the patch for correctness and
performance.

For example, the following hunk seems buggy to me:

Index: src/backend/storage/buffer/bufmgr.c
===================================================================
RCS file: /home/sriggs/pg/REPOSITORY/pgsql/src/backend/storage/buffer/bufmgr.c,v
retrieving revision 1.232
diff -c -r1.232 bufmgr.c
*** src/backend/storage/buffer/bufmgr.c 12 Jun 2008 09:12:31 -0000 1.232
--- src/backend/storage/buffer/bufmgr.c 30 Jun 2008 22:17:20 -0000
***************
*** 1460,1473 ****

if (bufHdr->refcount == 0 && bufHdr->usage_count == 0)
result |= BUF_REUSABLE;
! else if (skip_recently_used)
{
/* Caller told us not to write recently-used buffers */
UnlockBufHdr(bufHdr);
return result;
}

! if (!(bufHdr->flags & BM_VALID) || !(bufHdr->flags & BM_DIRTY))
{
/* It's clean, so nothing to do */
UnlockBufHdr(bufHdr);
--- 1462,1477 ----

if (bufHdr->refcount == 0 && bufHdr->usage_count == 0)
result |= BUF_REUSABLE;
! else if (LRU_scan)
{
/* Caller told us not to write recently-used buffers */
UnlockBufHdr(bufHdr);
return result;
}

! if (!(bufHdr->flags & BM_VALID) ||
! !(bufHdr->flags & BM_DIRTY ||
! (LRU_scan && bufHdr->hint_count > 0)))
{
/* It's clean, so nothing to do */
UnlockBufHdr(bufHdr);


In the "if" condition above, we would throw away a buffer if the
hint_count is greater than zero, even if the buffer is dirty. This
doesn't seem correct to me, unless I am missing something obvious.


> The actual tuning patch can be discussed later, probably at length.
> Later patches will be fairly small in comparison and so various people
> can fairly easily come up with their own favoured modifications for
> testing.
>
>

I would suggest, let's have at least one tuning patch along with some
tests and numbers, before we go ahead and commit anything.

Thanks,
Pavan

--
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com

--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

Re: [HACKERS] TODO item: Have psql show current values for a sequence

Bruce Momjian <bruce@momjian.us> writes:
> Wow. I adjusted the patch slightly and applied it; the updated version
> is attached. We have been waiting for this to be done for quite some
> time. Thanks.

Hmm ... I don't think that this patch actually addresses the TODO item.
The TODO item seems to have originated here
http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/doc/TODO.diff?r1=1.1220;r2=1.1221;f=h
in response to this question on pgsql-novice:

> How can I list all the sequences in the database, with their
> attributes (such as last_value)? (I'm having a hard time guessing
> 'seq-name'; the 'A_id_seq' formula did not work.)
http://archives.postgresql.org/pgsql-novice/2004-02/msg00148.php

This applied-with-little-discussion patch only shows the sequence
values if you do a \d on a specific sequence, or \d on a wildcard
that happens to include some sequences (and probably a lot of other
stuff too, causing the resulting display to be far too long to be
useful).

My interpretation of the TODO item has always been that we should
improve \ds to include all the useful information in a format that
requires only one line per sequence. The reason it has remained
undone for four years is that that's hard given the existing catalog
representation of sequences and the constraints of describe.c's
implementation. (I recall at least one failed patch that tried to
do this, though I can't find it in the archives right now.)

I find the present patch to be pretty useless: it's not a material
advance over doing "select * from sequence-name". I think it should
be reverted and the TODO item reinstated --- perhaps with more detail
about what the item really is requesting.
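(For concreteness, the per-sequence query that already works today; "my_seq" here is any sequence name:)

SELECT sequence_name, last_value, increment_by, max_value, is_cycled
FROM my_seq;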

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [pgsql-es-ayuda] Conexiones simultaneas

2008/7/18 Edwin Quijada <listas_quijada@hotmail.com>:
>
> I'm using Debian.
> Yesterday my system went down because it hit the maximum connection limit of 100, and I had to raise it to 1000 as an emergency measure.
> I haven't been able to lower it again yet because the system is in production and I need to do a restart.
> I'm worried that PostgreSQL might reserve something when such a high limit of simultaneous connections is declared.
>

And why don't you use a connection pool?
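(For example, a minimal pgbouncer.ini sketch; pgbouncer is only one pooling option among several, pgpool being another, and every value below is illustrative:)

[databases]
; applications connect to the pooler instead of directly to PostgreSQL
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling: a server connection is held only for the duration of a transaction
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 50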

--
Regards,
Jaime Casanova
PostgreSQL support and training
Guayaquil - Ecuador
Cel. (593) 87171157
--
TIP 3: If you found the answer to your problem, post it; others will thank you for it

[ANNOUNCE] == PostgreSQL Weekly News - July 20 2008 ==

== PostgreSQL Weekly News - July 20 2008 ==

New Survey: How do you usually install or update PostgreSQL?
http://www.postgresql.org/community

Peter Eisentraut of the Core Team today took a PostgreSQL development
job at Sun Microsystems. At the same time, Josh Berkus left Sun, but
he hasn't said yet where he's going.

== PostgreSQL Product News ==

check_postgres 2.0.1 released.
http://bucardo.org/check_postgres/

Code Factory 8.7 for Windows released.
http://www.sqlmaestro.com/products/postgresql/codefactory/

Markus Wanner (né Schiltknecht) has released a patch vs. CVS HEAD for Postgres-R.
http://www.postgres-r.org/downloads/

2ndQuadrant PostgreSQL Administration using Navicat training released.
http://www.2ndQuadrant.com/training.htm

== PostgreSQL Jobs for July ==

http://archives.postgresql.org/pgsql-jobs/2008-07/threads.php

== PostgreSQL Local ==

Sponsor the European PGDay!
http://www.pgday.org/en/sponsors/campaign

The Call for Papers for European PGDay has begun.
http://www.pgday.org/en/call4papers

TorontoPUG meeting will be on July 28.
http://pugs.postgresql.org/torontopug

pgDay San Francisco will be August 5. Schedule:
http://pugs.postgresql.org/node/447
Register here:
http://www.linuxworldexpo.com/live/12/ehall//SN460564

PGCon Brazil 2008 will be on September 26-27 at Unicamp in Campinas.
http://pgcon.postgresql.org.br/index.en.html

PGDay.(IT|EU) 2008 will be October 17 and 18 in Prato.
http://www.pgday.org/it/

== PostgreSQL in the News ==

Planet PostgreSQL: http://www.planetpostgresql.org/

General Bits, Archives and occasional new articles:
http://www.varlena.com/GeneralBits/

PostgreSQL Weekly News is brought to you this week by David Fetter
and Josh Berkus.

Submit news and announcements by Sunday at 3:00pm Pacific time.
Please send English language ones to david@fetter.org, German language
to pwn@pgug.de, Italian language to pwn@itpug.org.

== Applied Patches ==

Tom Lane committed:

- pgsql/src/include/storage/bufpage.h, clean up buildfarm failures
arising from the seemingly straightforward page macros patch :-(.
Results from both baiji and mastodon imply that MSVC fails to
perceive offsetof(PageHeaderData, pd_linp[0]) as a constant
expression in some contexts where offsetof(PageHeaderData, pd_linp)
works fine. Sloth, thy name is Macro.

- Support "variadic" functions, which can accept a variable number of
arguments so long as all the trailing arguments are of the same
(non-array) type. The function receives them as a single array
argument (which is why they have to all be the same type). It might
be useful to extend this facility to aggregates, but this patch
doesn't do that. This patch imposes a noticeable slowdown on
function lookup --- a follow-on patch will fix that by adding a
redundant column to pg_proc. Pavel Stehule

- Add a "provariadic" column to pg_proc to eliminate the remarkably
expensive need to deconstruct proargmodes for each pg_proc entry
inspected by FuncnameGetCandidates(). Fixes function lookup
performance regression caused by yesterday's variadic-functions
patch. In passing, make pg_proc.probin be NULL, rather than a dummy
value '-', in cases where it is not actually used for the particular
type of function. This should buy back some of the space cost of
the extra column.

- In pgsql/src/backend/commands/tablecmds.c, fix previous patch so
that it actually works --- consider TRUNCATE foo, public.foo.

- In pgsql/src/backend/nodes/outfuncs.c, add dump support for SortBy
nodes. Needed this while debugging a reported problem with
DISTINCT, so might as well commit it.

- Implement SQL-spec RETURNS TABLE syntax for functions. (Unlike the
original submission, this patch treats TABLE output parameters as
being entirely equivalent to OUT parameters -- tgl) Pavel Stehule.

- In pgsql/src/bin/psql/describe.c, suppress compiler warning, and not
incidentally make the code more robust. The previous coding was
quite risky because it was testing conditions different from 'is the
array really allocated?'.

- In pgsql/src/backend/storage/ipc/sinvaladt.c, fix a race condition
that I introduced into sinvaladt.c during the recent rewrite. When
called from SIInsertDataEntries, SICleanupQueue releases the write
lock if it has to issue a kill() to signal some laggard backend.
That still seems like a good idea --- but it's possible that by the
time we get the lock back, there are no longer enough free message
slots to satisfy SIInsertDataEntries' requirement. Must recheck,
and repeat the whole SICleanupQueue process if not. Noted while
reading code.

- Provide a function hook to let plug-ins get control around
ExecutorRun. ITAGAKI Takahiro

- Adjust things so that the query_string of a cached plan and the
sourceText of a portal are never NULL, but reliably provide the
source text of the query. It turns out that there was only one
place that was really taking a short-cut, which was the 'EXECUTE'
utility statement. That doesn't seem like a sufficiently critical
performance hotspot to justify not offering a guarantee of validity
of the portal source text. Fix it to copy the source text over from
the cached plan. Add Asserts in the places that set up cached plans
and portals to reject null source strings, and simplify a bunch of
places that formerly needed to guard against nulls. There may be a
few places that cons up statements for execution without having any
source text at all; I found one such in ConvertTriggerToFK(). It
seems sufficient to inject a phony source string in such a case, for
instance ProcessUtility((Node *) atstmt, "(generated ALTER TABLE ADD
FOREIGN KEY command)", NULL, false, None_Receiver, NULL); We should
take a second look at the usage of debug_query_string, particularly
the recently added current_query() SQL function. ITAGAKI Takahiro
and Tom Lane

- Avoid substituting NAMEDATALEN, FLOAT4PASSBYVAL, and FLOAT8PASSBYVAL
into the postgres.bki file during build, because we want that file
to be entirely platform- and configuration-independent; else it
can't safely be put into /usr/share on multiarch machines. We can
do the substitution during initdb, instead. FLOAT4PASSBYVAL and
FLOAT8PASSBYVAL are new breakage as of 8.4, while the NAMEDATALEN
hazard has been there all along but I guess no one tripped over it.
Noticed while trying to build "universal" OS X binaries.

- Add a pg_dump option --lock-wait-timeout to allow failing the dump
if unable to acquire shared table locks within a specified amount of
time. David Gould.

- Code review for array_fill patch: fix inadequate check for array
size overflow and bogus documentation (dimension arrays are int[]
not anyarray). Also the errhint() messages seem to be really
errdetail(), since there is nothing heuristic about them. Some
other trivial cosmetic improvements.

Bruce Momjian committed:

- Mark TODO as done, per Simon Riggs: "Fix server restart problem when
the server was shutdown during a PITR backup."

- Add URL for TODO: "Consider allowing control of upper/lower case
folding of unquoted identifiers."

- Mark TODO as done: "Add temporal versions of generate_series()."

- In psql, rename trans_* variables to translate_* for clarity.

- In pgsql/src/bin/psql/describe.c, add column storage type to psql
\d+ display. Gregory Stark.

- Add to TODO: "Improve ability to modify views via ALTER TABLE."

- In pgsql/src/bin/psql/describe.c, add comment about literal strings
in our syntax not being translated in psql.

- In pgsql/doc/src/sgml/charset.sgml, clarify that locale names on
Windows are more verbose. Report from Martin Saschek

- In pgsql/src/bin/psql/describe.c, have psql \d show the value of
sequence columns. Dickson S. Guedes.

- Mark TODO as done: "Have psql show current values for a sequence."

- Add TODO: "Consider decreasing the I/O caused by updating tuple hint
bits."

- In pgsql/src/bin/psql/describe.c, addendum: psql sequence value
display patch was originally written by Euler Taveira de Oliveira.

- Add to TODO: "Reduce PITR WAL file size by removing full page writes
and by removing trailing bytes to improve compression."

- In pgsql/doc/src/sgml/charset.sgml, add Swedish_Sweden.1252 Windows
locale example to docs.

- In pgsql/doc/src/sgml/func.sgml, fix alignment of SGML array docs.

- Add array_fill() to create arrays initialized with a value. Pavel
Stehule.

- Add to TODO: "Add external tool to auto-tune some postgresql.conf
parameters."

- In pgsql/src/backend/commands/tablecmds.c, allow TRUNCATE foo, foo
to succeed, per report from Nikhils.

- Add URL for TODO: "Implement SQL:2003 window functions."

- Add to TODO: "Reduce locking requirements for creating a trigger."

- Add URL for TODO: "Implement SQL:2003 window functions."

- In psql, run .psqlrc _after_ printing warnings and banner.

- Properly document archive/restore command examples on Windows.
ITAGAKI Takahiro

- In pgsql/src/bin/psql/startup.c, revert patch so .psqlrc can
suppress startup banner. Run .psqlrc _after_ printing warnings and
banner.

Alvaro Herrera committed:

- In pgsql/src/backend/postmaster/autovacuum.c, avoid crashing when a
table is deleted while we're on the process of checking it. Per
report from Tom Lane based on buildfarm evidence.

- Add MSVC++ debug libraries to .cvsignore.

== Rejected Patches (for now) ==

No one was disappointed this week :-)

== Pending Patches ==

ITAGAKI Takahiro sent in another revision of his patch executor_hook
for pg_stat_statements patch.

Xiao Meng sent in three revisions of his patch to improve the
performance of hash indexes.

David Wheeler sent in another revision of his case-insensitive text
patch.

Sushant Sinha sent in three patches to update the tsearch2
documentation and add regression testing for the case when cover size
is larger than MaxWords.

Simon Riggs sent in two revisions of a patch designed to report when
we're doing an anti-wraparound VACUUM.

Jan Urbanski sent in a WIP patch to create an oprrest function for
tsvector @@ tsquery and tsquery @@ tsvector.

Simon Riggs sent in another revision of his patch to add pg_dump
options --schema-pre-load and --schema-post-load.


---------------------------(end of broadcast)---------------------------
-To unsubscribe from this list, send an email to:

pgsql-announce-unsubscribe@postgresql.org

Re: [pgsql-es-ayuda] Urgente!!

On Sun, 2008-07-20 at 23:55 -0500, Carlos Alberto Cardenas Valdivia
wrote:
> I'd like to ask you a big, big favor... please don't send me any more
> mail... on any topic... my mailbox is about to be blocked because of
> the huge amount of mail I receive every day. I hope you understand...
> Thanks for your understanding

Unsubscribe yourself.
http://www.postgresql.org/mailpref/pgsql-es-ayuda

Regards!
Roberto
--
visit my weblog!
http://trasto.hopto.org/weblog
softwarelibre@diinf
http://softwarelibre.diinf.usach.cl

[pgsql-es-ayuda] Urgente!!

I'd like to ask you a big, big favor... please don't send me any more mail... on any topic... my mailbox is about to be blocked because of the huge amount of mail I receive every day. I hope you understand... Thanks for your understanding

[COMMITTERS] pgsql: Code review for array_fill patch: fix inadequate check for array

Log Message:
-----------
Code review for array_fill patch: fix inadequate check for array size overflow
and bogus documentation (dimension arrays are int[] not anyarray). Also the
errhint() messages seem to be really errdetail(), since there is nothing
heuristic about them. Some other trivial cosmetic improvements.

Modified Files:
--------------
pgsql/doc/src/sgml:
func.sgml (r1.442 -> r1.443)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/doc/src/sgml/func.sgml?r1=1.442&r2=1.443)
pgsql/src/backend/utils/adt:
arrayfuncs.c (r1.146 -> r1.147)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/backend/utils/adt/arrayfuncs.c?r1=1.146&r2=1.147)
pgsql/src/test/regress/expected:
arrays.out (r1.37 -> r1.38)
(http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/test/regress/expected/arrays.out?r1=1.37&r2=1.38)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [PATCHES] pg_dump additional options for performance

Stephen Frost <sfrost@snowman.net> writes:
> * daveg (daveg@sonic.net) wrote:
>> One observation, indexes should be built right after the table data
>> is loaded for each table, this way, the index build gets a hot cache
>> for the table data instead of having to re-read it later as we do now.

> That's not how pg_dump has traditionally worked, and the point of this
> patch is to add options to easily segregate the main pieces of the
> existing pg_dump output (main schema definition, data dump, key/index
> building). Your suggestion brings up an interesting point: should
> pg_dump's traditional output structure change, the "--schema-post-load"
> set of objects wouldn't be as clear to newcomers, since the load and the
> indexes would be interleaved in the regular output.

Yeah. Also, that is pushing into an entirely different line of
development, which is to enable multithreaded pg_restore. The patch
at hand is necessarily incompatible with that type of operation, and
wouldn't be used together with it.

As far as the documentation/definition aspect goes, I think it should
just say the parts are
* stuff needed before you can load the data
* the data
* stuff needed after loading the data
and not try to be any more specific than that. There are corner cases
that will turn any simple breakdown into a lie, and I doubt that it's
worth trying to explain them all. (Take a close look at the dependency
loop breaking logic in pg_dump if you doubt this.)

I hadn't realized that Simon was using "pre-schema" and "post-schema"
to name the first and third parts. I'd agree that this is confusing
nomenclature: it looks like it's trying to say that the data is the
schema, and the schema is not! How about "pre-data and "post-data"?

regards, tom lane

--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

Re: [pgsql-es-ayuda] problema con dump de una bd

--- On Mon, 21-Jul-08, Carlos Mendez <lucas1850@gmail.com> wrote:

> From: Carlos Mendez <lucas1850@gmail.com>
> Subject: Re: [pgsql-es-ayuda] problema con dump de una bd
> To: hermeszambra@yahoo.com
> Cc: pgsql-es-ayuda@postgresql.org
> Date: Monday, July 21, 2008, 12:14 am
> Hi, thanks for the replies. This is getting complicated because I need
> to make a dump of the database.
> pg_dump is in /usr/local/pgsql/bin/
> I cd into that folder and type pg_dump, and it answers:
> bash: pg_dump command not found,
> and it also generates a file, but one of 0 Kb,
> yet when I do an ls the file is there along with the others.
> How do I get pg_dump to work? I only need to dump the database to a
> file,
> or how can I use another tool, do it some other way, or replace
> pg_dump with something else?
> As I explained, phppgadmin works fine: I create databases, insert,
> create tables;
> the only thing that doesn't work is pg_dump. Could something be
> blocking it?
> Could pgaccess have something to do with it? I installed it and I'm
> not sure whether the errors started after installing it; pgaccess has
> some connection files that have to be deleted every time for it to
> work correctly (that's how it was on Windows), but when I start
> pgaccess the console reports an error. pgaccess itself works fine,
> but the console shows the following (the database I want to dump is
> called "db"):
>
> [root@localhost ~]# pgaccess
>
> ERROR: There seems to be a problem with your connections
> file.
> A host/db combination should be unique and the db should
> not be empty string
> Check host/db: localhost/prueba with ids: 5 1
> Try removing the ~/.pgaccess/connections file
> Skipping this host/db combination
>
>
> ERROR: There seems to be a problem with your connections
> file.
> A host/db combination should be unique and the db should
> not be empty string
> Check host/db: localhost/colegio4 with ids: 6 14
> Try removing the ~/.pgaccess/connections file
> Skipping this host/db combination
>
> I need to run some kind of diagnostic on pg_dump to see whether it is
> OK, but I don't know
> how to do that.
> Thanks in advance for the help,
> regards.
>
>

Have you tried connecting with pgAdmin III? Maybe you can get somewhere that way.


> >


--
TIP 8: explain analyze is your friend

Re: [PATCHES] pg_dump additional options for performance

* daveg (daveg@sonic.net) wrote:
> One observation, indexes should be built right after the table data
> is loaded for each table, this way, the index build gets a hot cache
> for the table data instead of having to re-read it later as we do now.

That's not how pg_dump has traditionally worked, and the point of this
patch is to add options to easily segregate the main pieces of the
existing pg_dump output (main schema definition, data dump, key/index
building). Your suggestion brings up an interesting point: should
pg_dump's traditional output structure change, the "--schema-post-load"
set of objects wouldn't be as clear to newcomers, since the load and the
indexes would be interleaved in the regular output.

I'd be curious about the performance impact this has on an actual load
too. It would probably be more valuable on smaller loads where it would
have less of an impact anyway than on loads larger than the cache size.
Still, not an issue for this patch, imv.

Thanks,

Stephen

Re: [PATCHES] pg_dump additional options for performance

On Sun, Jul 20, 2008 at 09:18:29PM -0400, Stephen Frost wrote:
> * Simon Riggs (simon@2ndquadrant.com) wrote:
> > On Sun, 2008-07-20 at 17:43 -0400, Stephen Frost wrote:
> > > Even this doesn't cover everything though- it's too focused on tables
> > > and data loading. Where do functions go? What about types?
> >
> > Yes, it is focused on tables and data loading. What about
> > functions/types? No relevance here.
>
> I don't see how they're not relevant, it's not like they're being
> excluded and in fact they show up in the pre-load output. Heck, even if
> they *were* excluded, that should be made clear in the documentation
> (either be an explicit include list, or saying they're excluded).
>
> Part of what's driving this is making sure we have a plan for future
> objects and where they'll go. Perhaps it would be enough to just say
> "pre-load is everything in the schema, except things which are faster
> done in bulk (eg: indexes, keys)". I don't think it's right to say
> pre-load is "only object definitions required to load data" when it
> includes functions and ACLs though.
>
> Hopefully my suggestion and these comments will get us to a happy
> middle-ground.

One observation, indexes should be built right after the table data
is loaded for each table, this way, the index build gets a hot cache
for the table data instead of having to re-read it later as we do now.

-dg


--
David Gould daveg@sonic.net 510 536 1443 510 282 0869
If simplicity worked, the world would be overrun with insects.

--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

[HACKERS] Any reason not to return row_count in cursor of plpgsql?

hi all,

I read the code, and it seems easy for a cursor in plpgsql to return
ROW_COUNT after MOVE LAST etc. The SPI_processed variable is already
there, but it isn't put into the estate structure; is there any reason
for that?
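(Concretely, the kind of thing I'd like to work, as a sketch; some_table is a placeholder, and whether ROW_COUNT gets filled in after MOVE is exactly the question:)

CREATE OR REPLACE FUNCTION rows_via_move() RETURNS integer AS $$
DECLARE
    c SCROLL CURSOR FOR SELECT * FROM some_table;
    n integer;
BEGIN
    OPEN c;
    MOVE LAST IN c;                 -- reposition without fetching any data
    GET DIAGNOSTICS n = ROW_COUNT;  -- this is what I'd like to be set here
    CLOSE c;
    RETURN n;
END;
$$ LANGUAGE plpgsql;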

thanks and best regards

-laser

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [pgsql-es-ayuda] Hola OT

--- On Sun, 20-Jul-08, Reynier Pérez Mira <rperezm@uci.cu> wrote:

> From: Reynier Pérez Mira <rperezm@uci.cu>
> Subject: Re: [pgsql-es-ayuda] Hola
> To: hermeszambra@yahoo.com
> Cc: pgsql-es-ayuda@postgresql.org, "Lic. Moissane Hernández Campos" <mhdez@cav.desoft.cu>
> Date: Sunday, July 20, 2008, 4:01 pm
> Hi everyone:
>
> > For these topics I'd advise you to turn to the
> > UCI; I believe someone
> > from Desoft is already getting involved. If I may be a bit
> > bold, I'd suggest you contact our colleague Reyneer, who
> > works with PostgreSQL
> > at the UCI and with whom we've had good
> > exchanges. I say this because
> > your questions go beyond the purpose of the list and are
> > somewhat long to
> > answer; any summary would fall short.
>
> Hehehe, Gabriel recommending me as always ;). Moissane,
> you can get in touch
> with me, but only if you hurry, since I'll be
> going on vacation
> next Monday and won't be back at the UCI until
> after August 25.
> I'm not an expert in the topics you mention, but we have
> good documentation on the subject. What you're asking about
> are basic things in the database field, and if you're just
> starting out, as I imagine you are,
> then the best thing would be to first read books that
> explain those
> concepts.
>

See, this is one reason I recommend you: you're quite advanced
and you know where to turn for material there on the big Island.

> As people say on the list, these are things that have nothing
> to do directly
> with the PostgreSQL database engine; rather, they apply to
> any DBMS
> in existence today. You can also lean on
> <<Saint Google>>, where
> you'll find very good documentation and even
> conference talks. I hate to say
> this, but the Micro$oft folks are very good at producing
> documentation, and on
> the MSDN and TechNet sites they have good material on the
> subject, of course
> oriented toward SQL Server, their DBMS, but it still helps
> you apply it to
> any DBMS, as I said before.

So much so that much of that material is based on the standard, and its examples work with little or no modification in postgresql, which is not the case with the DBMS of the very firm that documentation is aimed at.

>
> It's only now that I'm discovering the world of PostgreSQL
> (until a few
> months ago I worked with MySQL), and believe me, to me it's
> a very powerful
> DBMS, and as long as it doesn't go over to the dark side at
> some point and keeps up
> stable, steadily improving development, it can come to compete
> with big names like
> Oracle, SQL Server and others.

Well, without falling into fanaticism, I think it has already surpassed SQL Server in many respects, even installed on Windows servers.
And that benchmark SUN ran comparing it with postgresql left it looking good; to me at least it showed that in cost/benefit terms PostgreSQL is the best DBMS, and I have plenty of arguments for that.

As for what I say about SQL Server, up to its 2003 version, the last one I touched: I say it with full authority, because I worked a great deal with it, both from the command line and from its GUI.

As for being certain it won't go over to the dark side,
I'd say that's unlikely because of its license; the BSD license guarantees it.
Mysql had an owner who shared; postgresql has a community where everyone and no one is the owner. Many projects have already taken the source code and created their own versions, and they have always come back to the community, contributing more experience and strengthening it further.

In my opinion postgresql is the best thing free software has produced.

Here is a ranking of tools for a LAN, for someone coming from Windows who wants to work with free software, choosing Linux as the operating system, in my humble opinion. The criteria used have to do with ease of learning and deployment, and with quality.

1 PostgreSQL
2 KDE
3 Gnome
4 Open Office
5 Samba
6 Firefox
7 Apache and Sendmail
8 PHP

>
> @Gabriel: Greetings to you from Cuba, and see you around
> here, since I'm
> starting some new projects with PostgreSQL and a few
> little questions have come up
> that I can't manage to resolve on my own.
> -

As always, it will be a pleasure to help with whatever is within my reach.
Happy vacation.

And a fraternal greeting from Uruguay

> Regards
> Ing. Reynier Pérez Mira
> Development Support Group - IP Technical Directorate
>
> --
> TIP 10: don't use HTML in your question; whoever answers
> surely won't be able to read it


--
TIP 4: Don't 'kill -9' the postmaster

Re: [NOVICE] Stopping a transaction as soon as an error occurs

On Thu, Jul 17, 2008 at 7:36 PM, John DeSoi <desoi@pgedit.com> wrote:
Put this at the top of your file to make psql stop when an error occurs:

\set ON_ERROR_STOP 1


Thanks for this! This is basically what I need!


Ridvan

--
Bill Cosby  - "Advertising is the most fun you can have with your clothes on."

Re: [PATCHES] pg_dump additional options for performance

* Simon Riggs (simon@2ndquadrant.com) wrote:
> On Sun, 2008-07-20 at 17:43 -0400, Stephen Frost wrote:
> > Even this doesn't cover everything though- it's too focused on tables
> > and data loading. Where do functions go? What about types?
>
> Yes, it is focused on tables and data loading. What about
> functions/types? No relevance here.

I don't see how they're not relevant, it's not like they're being
excluded and in fact they show up in the pre-load output. Heck, even if
they *were* excluded, that should be made clear in the documentation
(either be an explicit include list, or saying they're excluded).

Part of what's driving this is making sure we have a plan for future
objects and where they'll go. Perhaps it would be enough to just say
"pre-load is everything in the schema, except things which are faster
done in bulk (eg: indexes, keys)". I don't think it's right to say
pre-load is "only object definitions required to load data" when it
includes functions and ACLs though.

Hopefully my suggestion and these comments will get us to a happy
middle-ground.

Thanks,

Stephen

Re: [HACKERS] [WIP] collation support revisited (phase 1)

I was trying to sort out the problem of not creating a new catalog for character sets, and I came up with the following ideas. Correct me if they're wrong.

Since a collation has to have a defined character set, I'm suggesting we use the already-written encoding infrastructure and the list of encodings in chklocale.c. Currently databases are created with a specified encoding, not a specified character set. So instead of pointing a record in the collation catalog at a record in a character set catalog, we could use only the name (string) of the encoding.

So each collation would be defined over the encodings listed in chklocale.c. Each database would only be able to use collations created over the same ("compatible") encodings, per encoding_match_list. Each standard collation (SQL standard) would be defined over all possible encodings (hard-coded).

Comments?

Regards

     Radek Strnad

On Sat, Jul 12, 2008 at 5:17 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Zdenek Kotala <Zdenek.Kotala@Sun.COM> writes:
> I think if we support UTF8 encoding, than it make sense to create own charsets,
> because system locales could have defined collation for that.

Say what?  I cannot imagine a scenario in which a user-defined encoding
would be useful. The amount of infrastructure you need for a new
encoding is so large that providing management commands is just silly
--- anyone who can create the infrastructure can do the last little bit
for themselves.  The analogy to index access methods is on point, again.

                       regards, tom lane

Re: [HACKERS] [PATCHES] WITH RECUSIVE patches 0717

> On Mon, Jul 21, 2008 at 08:19:35AM +0900, Tatsuo Ishii wrote:
> > > > Thus I think we should avoid this kind of ORDER BY. Probably we should
> > > > avoid LIMIT/OFFSET and FOR UPDATE as well.
> > >
> > > What of index-optimized SELECT max(...) ?
> >
> > Aggregate functions in a recursive term are prohibited by the
> > standard. For example,
> >
> > WITH RECURSIVE x(n) AS (SELECT 1 UNION ALL SELECT max(n) FROM x)
> > SELECT * FROM x;
> >
> > produces an error.
>
> On the other side of UNION ALL, it's OK, right? For example,
>
> WITH RECURSIVE x(n) AS (
> SELECT max(i) FROM t
> UNION ALL
> SELECT n+1 FROM x WHERE n < 20
> )

Yes, aggregate functions in the non-recursive term are allowed by the
standard.
--
Tatsuo Ishii
SRA OSS, Inc. Japan

--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

Re: [HACKERS] [PATCHES] WITH RECUSIVE patches 0717

On Mon, Jul 21, 2008 at 08:19:35AM +0900, Tatsuo Ishii wrote:
> > > Thus I think we should avoid this kind of ORDER BY. Probably we should
> > > avoid LIMIT/OFFSET and FOR UPDATE as well.
> >
> > What of index-optimized SELECT max(...) ?
>
> Aggregate functions in a recursive term are prohibited by the
> standard. For example,
>
> WITH RECURSIVE x(n) AS (SELECT 1 UNION ALL SELECT max(n) FROM x)
> SELECT * FROM x;
>
> produces an error.

On the other side of UNION ALL, it's OK, right? For example,

WITH RECURSIVE x(n) AS (
SELECT max(i) FROM t
UNION ALL
SELECT n+1 FROM x WHERE n < 20
)

Cheers,
David.
--
David Fetter <david@fetter.org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david.fetter@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

Re: [HACKERS] [PATCHES] WITH RECUSIVE patches 0717

> > Thus I think we should avoid this kind of ORDER BY. Probably we should
> > avoid LIMIT/OFFSET and FOR UPDATE as well.
>
> What of index-optimized SELECT max(...) ?

Aggregate functions in a recursive term are prohibited by the
standard. For example,

WITH RECURSIVE x(n) AS (SELECT 1 UNION ALL SELECT max(n) FROM x)
SELECT * FROM x;

produces an error.
--
Tatsuo Ishii
SRA OSS, Inc. Japan

--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

Re: [pgsql-www] Moderation of pgsql-cygwin?

On Sun, Jul 20, 2008 at 12:11:36PM -0400, Tom Lane wrote:
> David Fetter <david@fetter.org> writes:
> > On Sun, Jul 20, 2008 at 04:51:14PM +0300, Peter Eisentraut wrote:
> >> Is anyone moderating pgsql-cygwin? I am not seeing my
> >> held-for-moderation mail getting through. At least it doesn't
> >> show up on the archives web page.
>
> > More to the point, is there any reason not to shut down this list?
>
> Don't get ahead of yourself. I presume the mail Peter is
> complaining about is the notice he tried to send out to notify
> people that we are thinking of shutting down the list.

Great :)

Cheers,
David.
--
David Fetter <david@fetter.org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david.fetter@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

--
Sent via pgsql-www mailing list (pgsql-www@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-www

Re: [HACKERS] [PATCHES] WITH RECUSIVE patches 0717

Tatsuo Ishii <ishii@postgresql.org> writes:
> Thus I think we should avoid this kind of ORDER BY. Probably we should
> avoid LIMIT/OFFSET and FOR UPDATE as well.

What of index-optimized SELECT max(...) ?

regards, tom lane

--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

Re: [pgsql-www] Moderation of pgsql-cygwin?

David Fetter <david@fetter.org> writes:
> On Sun, Jul 20, 2008 at 04:51:14PM +0300, Peter Eisentraut wrote:
>> Is anyone moderating pgsql-cygwin? I am not seeing my held-for-moderation
>> mail getting through. At least it doesn't show up on the archives web page.

> More to the point, is there any reason not to shut down this list?

Don't get ahead of yourself. I presume the mail Peter is complaining
about is the notice he tried to send out to notify people that we are
thinking of shutting down the list.

regards, tom lane

--
Sent via pgsql-www mailing list (pgsql-www@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-www

Re: [HACKERS] temp table problem

"Heikki Linnakangas" <heikki@enterprisedb.com> writes:
> The underlying problem is that when we do GetOverrideSearchPath() in
> CreateCachedPlan, the memorized search path doesn't include pg_temp, if
> the temp namespace wasn't initialized for the backend yet. When we later
> need to revalidate the plan, pg_temp still isn't searched, even if it
> now exists.

So what's the problem? The cached plan couldn't have referred to a temp
table.

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [pgus-board] Flyer text, v.1

Looks good to me!

On Sat, 2008-07-19 at 20:25 -0400, Michael Alan Brewer wrote:
> Hey, y'all; here's my first pass at putting together the PgUS flyer:
>
> [Is this a single sheet? Trifold?]
> ###############################
> The United States PostgreSQL Association welcomes you!
>
> ***GOALS***
> The US PostgreSQL Association (PgUS) is a non-profit corporation with
> the following primary goals:
>
> (a) Educate, promote and support the creation, development and use of
> the PostgreSQL Open Source Database software, a software system which
> is available to the general public without charge;
>
> (b) Provide information and education regarding the use of PostgreSQL; and
>
> (c) Organize, hold and conduct meetings, discussion, and forums on the
> contemporary issues concerning the use of PostgreSQL.
>
> What do these mean for you?
>
> >>>.COM
> * Create sponsorship programs that utilize the power and influence of
> the for-profit market to continue the promotion of PostgreSQL by
> educating professional users and corporations on the benefits of using
> the database. Further the education of PostgreSQL through the use of development
> grants.
>
> >>>.EDU
> * Promote the use of PostgreSQL in academic curriculum, educational
> support applications, and in papers and presentations. From the server
> to the classroom, expand the presence of PostgreSQL.
>
> >>>.YOU
> * Support and offer PostgreSQL Conferences, Workshops and other
> educational events centered around PostgreSQL, such as the PostgreSQL Community
> conferences (EAST and WEST). Support User Groups in their quest to
> bring community together as a
> way to advance their own knowledge of PostgreSQL.
>
> ***MEMBERSHIP***
> There are many benefits to PgUS membership:
>
> 1. A postgresql.us email forward (username@postgresql.us)
> 2. Aggregation of member blogs
> 3. Eligibility for grants
> 4. Eligibility to serve on or lead a committee
> 5. Ability to vote in elections
> 6. Ability to bring motions
> 7. Professional listing in member directory
> 8. A postgresql.us Jabber(R) account
>
> Membership dues shall be the following:
>
> $75 -- Professional
> $20 -- Student
>
> **********************
>
> To learn more about us and our mission, please visit us on the web at:
>
> postgresql.us
>
> #####################################################
> #####################################################
> This is 286 words (according to, umm, Word). The parts in ****CAPS
> **** and/or in >>>CAPS would require special formatting (larger font,
> drop letter, etc.); Selena, would this fit in your template?
>
> Please send suggestions/corrections/updates ASAP; I'd like to get
> this to the printer. ;)
>
> ---Michael Brewer
> mbrewer@gmail.com
>
--
The PostgreSQL Company since 1997: http://www.commandprompt.com/
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate


--
Sent via pgus-board mailing list (pgus-board@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgus-board

Re: [HACKERS] [PATCHES] WITH RECUSIVE patches 0717

> This crashes the backend:
>
> WITH RECURSIVE t(n) AS (
> VALUES (1)
> UNION ALL
> SELECT n+1 FROM t WHERE n < 5 ORDER BY 1
> )
> SELECT n FROM t;
>
> apparently because of the ORDER BY 1

Thanks for the report. I think ORDER BY in this case is useless
anyway. The ORDER BY applies to (VALUES (1) UNION ALL SELECT n+1 FROM t
WHERE n < 5). Since this is a recursive query, the value of (VALUES (1)
UNION ALL SELECT n+1 FROM t WHERE n < 5) is not determined until the
recursion stops, so the meaning of the ORDER BY is vague. If the caller
wants the sorted result of the recursion, he can always write:

WITH RECURSIVE t(n) AS (
VALUES (1)
UNION ALL
SELECT n+1 FROM t WHERE n < 5
)
SELECT n FROM t ORDER BY 1;

Thus I think we should avoid this kind of ORDER BY. Probably we should
avoid LIMIT/OFFSET and FOR UPDATE as well. The included patches add
this check plus minor error-message clarifications. I also include new
error-case SQL.

> ( ORDER BY t.n will just error out )
>
> Compiled with:
>
> ./configure \
> --prefix=${install_dir} \
> --with-pgport=${pgport} \
> --quiet \
> --enable-depend \
> --enable-cassert \
> --enable-debug \
> --with-openssl
>
>
> hth
>
> Erik Rijkers
>
>
>
>
>

Re: [HACKERS] temp table problem

Tom Lane wrote:
> What PG version are you testing? Maybe you need to show a complete
> test case, instead of leaving us to guess at details?

I think that example is bogus. Let's forget that one, and look at the
attached script.

The underlying problem is that when we do GetOverrideSearchPath() in
CreateCachedPlan, the memorized search path doesn't include pg_temp, if
the temp namespace wasn't initialized for the backend yet. When we later
need to revalidate the plan, pg_temp still isn't searched, even if it
now exists.

(On 8.3 and CVS HEAD)

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

Re: [GENERAL] Writing a user defined function

Hello

2008/7/20 Suresh <suiyengar@yahoo.com>:
> Hello,
>
> Version is 8.1.3. It's an older version, in which I have some custom code.
> I want to test the code with a function which has a seq scan and a blocking
> loop.
>

First, scrollable cursors are only supported from 8.3 onwards.
Second, you cannot declare a cursor inside the block body; see the plpgsql documentation:

http://www.postgresql.org/docs/8.3/interactive/plpgsql-structure.html
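Something along these lines should work on 8.1 for that kind of test (a sketch only; tpcd.customer is taken from your error message, and the inner loop is just there to burn time):

CREATE OR REPLACE FUNCTION udf() RETURNS integer AS $$
DECLARE
    c CURSOR FOR SELECT * FROM tpcd.customer;  -- bound cursor, no SCROLL before 8.3
    r tpcd.customer%ROWTYPE;
    j integer := 0;
BEGIN
    OPEN c;
    LOOP
        FETCH c INTO r;             -- sequential scan, one row at a time
        EXIT WHEN NOT FOUND;
        FOR i IN 1..10000 LOOP      -- busy loop per row
            j := j + 1;
        END LOOP;
    END LOOP;
    CLOSE c;
    RETURN j;
END;
$$ LANGUAGE plpgsql;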

regards
Pavel Stehule

> Thanks,
> Suresh
>
> --- On Sun, 7/20/08, Pavel Stehule <pavel.stehule@gmail.com> wrote:
>
> From: Pavel Stehule <pavel.stehule@gmail.com>
> Subject: Re: [GENERAL] Writing a user defined function
> To: "Suresh_" <suiyengar@yahoo.com>
> Cc: pgsql-general@postgresql.org
> Date: Sunday, July 20, 2008, 1:33 AM
>
> Hello
>
> what is version of your postgresql?
>
> regards
> Pavel Stehule
>
> 2008/7/20 Suresh_ <suiyengar@yahoo.com>:
>>
>> I get this error
>>
>> ERROR: syntax error at or near "cursor"
>>
> CONTEXT: invalid type name "scroll cursor for select * from
> tpcd.customer"
>> compile of PL/pgSQL function "udf" near line 5
>>
>>
>> Douglas McNaught wrote:
>>>
>>> On Fri, Jul 18, 2008 at 12:07 PM, Suresh_ <suiyengar@yahoo.com>
> wrote:
>>>>
>>>> Hello,
>>>> I am trying to code a simple udf in postgres. How do I write sql
>>>> commands
>>>> into pl/sql ? The foll. code doesnt work.
>>>>
>>>> CREATE OR REPLACE FUNCTION udf()
>>>> RETURNS integer AS $$
>>>> BEGIN
>>>> for i in 1..2000 loop
>>>> for j in 1...10000 loop
>>>> end loop;
>>>> begin work;
>>>
>>> Postgres doesn't let you do transactions inside a function.
>>>
>>> Take out the BEGIN and COMMIT, and if you still get errors post the
>>> function code and the error
> message that you get.
>>>
>>> -Doug
>>>
>>> --
>>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>>> To make changes to your subscription:
>>> http://www.postgresql.org/mailpref/pgsql-general
>>>
>>>
>>
>> --
>> View this message in context:
> http://www.nabble.com/Writing-a-user-defined-function-tp18532591p18551845.html
>> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>>
>>
>> --
>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>>
>

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [pgsql-www] Moderation of pgsql-cygwin?

On Sun, Jul 20, 2008 at 04:51:14PM +0300, Peter Eisentraut wrote:
> Is anyone moderating pgsql-cygwin? I am not seeing my held-for-moderation
> mail getting through. At least it doesn't show up on the archives web page.

More to the point, is there any reason not to shut down this list?

Cheers,
David.
--
David Fetter <david@fetter.org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david.fetter@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

--
Sent via pgsql-www mailing list (pgsql-www@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-www

Re: [GENERAL] Writing a user defined function

Hello,

Version is 8.1.3. It's an older version, in which I have some custom code.
I want to test the code with a function which has a seq scan and a blocking loop.

Thanks,
Suresh

--- On Sun, 7/20/08, Pavel Stehule <pavel.stehule@gmail.com> wrote:
From: Pavel Stehule <pavel.stehule@gmail.com>
Subject: Re: [GENERAL] Writing a user defined function
To: "Suresh_" <suiyengar@yahoo.com>
Cc: pgsql-general@postgresql.org
Date: Sunday, July 20, 2008, 1:33 AM

Hello

what is version of your postgresql?

regards
Pavel Stehule

2008/7/20 Suresh_ <suiyengar@yahoo.com>:
>
> I get this error
>
> ERROR: syntax error at or near "cursor"
> CONTEXT: invalid type name "scroll cursor for select * from
tpcd.customer"
> compile of PL/pgSQL function "udf" near line 5
>
>
> Douglas McNaught wrote:
>>
>> On Fri, Jul 18, 2008 at 12:07 PM, Suresh_ <suiyengar@yahoo.com>
wrote:
>>>
>>> Hello,
>>> I am trying to code a simple udf in postgres. How do I write sql
>>> commands
>>> into pl/sql ? The foll. code doesnt work.
>>>
>>> CREATE OR REPLACE FUNCTION udf()
>>> RETURNS integer AS $$
>>> BEGIN
>>> for i in 1..2000 loop
>>> for j in 1...10000 loop
>>> end loop;
>>> begin work;
>>
>> Postgres doesn't let you do transactions inside a function.
>>
>> Take out the BEGIN and COMMIT, and if you still get errors post the
>> function code and the error message that you get.
>>
>> -Doug
>>
>> --
>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>>
>>
>
> --
> View this message in context:
http://www.nabble.com/Writing-a-user-defined-function-tp18532591p18551845.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

Re: [SQL] PERSISTANT PREPARE (another point of view)

Richard Huxton wrote:

>> Milan Oparnica wrote:
>>
>> It's simply too complicated to return recordsets through
>>server-side stored procedures. They are obviously designed to do
>>complex data manipulation ...

> Richard wrote:
>I'm not convinced it's always a win one way or another.
>You still haven't said what's "too complicated" about defining a
>function:
>
>CREATE FUNCTION users_at_dotcom(text) RETURNS SETOF users AS $$
> SELECT * FROM users WHERE email LIKE '%@' || $1 || '.com';
>$$ LANGUAGE SQL;
> Richard Huxton
> Archonet Ltd
-------------------------------------------------------------------
Hi Richard,

It sounds like you suggest using neither PREPARED statements nor stored
procedures to fetch data. What do you think is the best way?

The example you posted is the only situation where it's simple to use
stored procedures to fetch data.

--------------------------------------------------------------------
Try to write the following simple scenario:

a. Data is retrieved from two tables with an INNER JOIN
b. I don't need all the fields, just some of them from both tables

Let's call the tables Customers and Orders.

The table definitions are:
Customers (CustomID INTEGER, Name TEXT(50), Adress TEXT(100))
Orders (OrderID INTEGER, CustomID INTEGER, OrderNum TEXT(10))

Now I need a list of order numbers for some customer:

SELECT C.CustomID, C.Name, O.OrderNum
FROM Customers C INNER JOIN Orders O ON C.CustomID=O.CustomID
WHERE C.Name LIKE <some input parameter>

Can you write this without defining a SETOF custom data type?
----------------------------------------------------------------------
NOTE! THIS IS VERY SIMPLIFIED REPRESENTATION OF REAL-LIFE STRATEGY.
----------------------------------------------------------------------
We sometimes have JOINs across up to 10 tables.

Besides, using report engines (like Crystal Reports) forces you to avoid
queries where the column order of the recordset can change. If you built a
report on a query having CustomID, Name, OrderNum columns, adding a column
(CustomID, Name, Adress, OrderNum) will require recompiling the report if
you want it to give correct results.

That's one of the reasons we avoid SELECT * statements. Another is that
some user roles do not have permission to examine table structures; in
such cases SELECT * returns an error.

I hope I managed to present what I meant by "too complicated" when using
stored procedures to fetch data.

PREPARED statements do not suffer from such overhead. They simply return
records as if the statement had been prepared in the client.

I will repeat: it took 5 minutes for a prepared statement to return the
results of the same SQL that took the stored procedure 16 minutes. The SP
was written to return a SETOF user type. If you want, I'll send you the
exact SQL and the database. Later we tested other queries, and performance
was always better with prepared statements than with stored procedures
returning SETOF user-defined types.
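
As an aside (not claiming it is simpler in every case): on 8.1 and later the
join above can also be returned without declaring a separate composite type,
by using OUT parameters. A rough sketch, assuming the Customers/Orders
definitions given above and an ordinary text parameter ('Smith%' below is
just an example pattern):

CREATE OR REPLACE FUNCTION orders_for_customer(
    IN  p_name   text,
    OUT customid integer,
    OUT name     text,
    OUT ordernum text)
RETURNS SETOF record AS $$
    SELECT C.CustomID, C.Name, O.OrderNum
    FROM Customers C INNER JOIN Orders O ON C.CustomID = O.CustomID
    WHERE C.Name LIKE $1;
$$ LANGUAGE SQL;

-- SELECT customid, name, ordernum FROM orders_for_customer('Smith%');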

Best regards,

Milan Oparnica

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

Re: [ADMIN] Created non-owner user cannot see database

Niklas Johansson wrote:
>
> On 20 jul 2008, at 04.47, Daniel J. Summers wrote:
>> grant usage on database custom_database to user_no_2;
>>
>> Now, none of these commands failed - they all came back with "CREATE
>> ROLE" (or the appropriate response).
>
> Are you sure?
>
> 'GRANT USAGE ON DATABASE...' is invalid syntax. You probably want
> 'GRANT CONNECT ON DATABASE...'.
Ah - it had been a while since I originally set it up. I just tried
"GRANT CONNECT", and the problem still exists. Thanks for the
suggestion. :)

--
Daniel J. Summers
Owner, DJS Consulting
E-mail - daniel@djs-consulting.com <mailto:daniel@djs-consulting.com>
Website - http://www.djs-consulting.com <http://www.djs-consulting.com/>
Technology Blog - http://www.djs-consulting.com/linux/blog

GEEKCODE 3.12 GCS/IT d s-:+ a C++ L++ E--- W++ N++ o? K- w !O M--
V PS+ PE++ Y? !PGP t+ 5? X+ R* tv b+ DI++ D+ G- e h---- r+++ y++++

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: [pgsql-es-ayuda] From Informix to Postgresql

2008/7/17 Luis Fernando Lopez Aguilar <flopezg333@gmail.com>:
> now my question is: to connect to the database with aubit4gl,
> do you use a browser???

Then this is no longer the appropriate list.


--
Regards,
Jaime Casanova
PostgreSQL support and training
Guayaquil - Ecuador
Cel. (593) 87171157
--
TIP 7: don't forget to increase the "free space map" setting

Re: [pgsql-es-ayuda] From Informix to Postgresql

2008/7/17 César Piñera García <cesar@gafi.com.mx>:
> aubit offers you a modified version of postgres that is fully compatible with
> Informix, but it is based on version 7, so I recommend that you use a newer
> version instead.
>

The latest versions of aubit work natively with pg8; you don't need a
patched version...

--
Regards,
Jaime Casanova
PostgreSQL support and training
Guayaquil - Ecuador
Cel. (593) 87171157
--
TIP 3: If you found the answer to your problem, post it; others will thank you for it

[pgsql-www] Moderation of pgsql-cygwin?

Is anyone moderating pgsql-cygwin? I am not seeing my held-for-moderation
mail getting through. At least it doesn't show up on the archives web page.

--
Sent via pgsql-www mailing list (pgsql-www@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-www

Re: [HACKERS] Getting to universal binaries for Darwin

Peter Eisentraut <peter_e@gmx.net> writes:
> For example, I'm a bit curious about the following aspect. This program should
> fail to compile on 32-bit platforms but succeed on 64-bit:

> #include <stddef.h>

> struct s { char a; long b; };

> int main(int argc, char *argv[])
> {
> int array[offsetof(struct s, b) - 5];

> return 0;
> }

> What happens if you run gcc -arch i386 -arch ppc64 on it? Does it require
> success on both output architectures?

Seems so. On a current MacBook Pro:

$ cat test.c
#include <stddef.h>

struct s { char a; long b; };

int main(int argc, char *argv[])
{
int array[offsetof(struct s, b) - 5];

return 0;
}
$ gcc -c test.c
test.c: In function 'main':
test.c:7: error: size of array 'array' is too large
$ gcc -arch i386 -c test.c
test.c: In function 'main':
test.c:7: error: size of array 'array' is too large
$ gcc -arch x86_64 -c test.c
$ gcc -arch ppc -c test.c
test.c: In function 'main':
test.c:7: error: size of array 'array' is too large
$ gcc -arch ppc64 -c test.c
$ gcc -arch i386 -arch x86_64 -c test.c
test.c: In function 'main':
test.c:7: error: size of array 'array' is too large
lipo: can't figure out the architecture type of: /var/folders/5M/5MGusdunEbWmuxTsRCYfbk+++TI/-Tmp-//ccfrarXl.out
$ gcc -arch i386 -arch ppc -c test.c
test.c: In function 'main':
test.c:7: error: size of array 'array' is too large
test.c: In function 'main':
test.c:7: error: size of array 'array' is too large
lipo: can't figure out the architecture type of: /var/folders/5M/5MGusdunEbWmuxTsRCYfbk+++TI/-Tmp-//ccFqrJgr.out
$

This doesn't look amazingly well tested though: what I suspect is
happening is that it runs N instances of the compiler (note multiple
errors in the last case) and then tries to sew their output together
with lipo, whether they succeeded or not. I'll bet the "can't figure
out" is reflecting not being able to make sense of a zero-length .o
file ...

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[JDBC] issue with select IN (?) query

Hi

I am facing an issue with the following select query

        pm = con.prepareStatement("SELECT EMP_NAME FROM EMP where EMP_ID IN (?) ");
        pm.setString(1, "2,5,7");        //created many employees and id with 2, 5 and 7
        rs = pm.executeQuery(); // query is not returning any values

Is it a bug?

I was using postgresql-8.3-603.jdbc3.jar

==================================================
the code
===================================================

     private static void test()
    {    
      Connection con = null;
      PreparedStatement pm = null;
      ResultSet rs =  null;
      try
      {
        String driver = "org.postgresql.Driver";
        String dburl="jdbc:postgresql://localhost/test";
        Class.forName(driver);
        con = DriverManager.getConnection(dburl, "postgres", "postgres");         
      
        pm = con.prepareStatement("SELECT EMP_NAME FROM EMP where EMP_ID IN (?) ");
        pm.setString(1, "2,5,7");        //created many employees and id with 2, 5 and 7
        rs = pm.executeQuery(); // query is not returning any values
        if (rs != null && rs.next()) //not returning any values
        {
               System.out.println(rs.getString(1));
        }
        else
          System.out.println("Nothing...");
        }
      catch (Exception ex)
      {
         ex.printStackTrace();
      }
      finally
      {
            try {
                rs.close();
                pm.close();
                con.close();
            } catch (Exception ex) {
                 ex.printStackTrace();
            }
        }
    }


======================================================
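
I suspect the single ? is being bound as one value, so setString(1, "2,5,7")
compares EMP_ID against the whole string "2,5,7" rather than against three
separate ids, which would explain why no rows come back. The only workaround
I can think of (a sketch only, reusing the same 'con', 'pm' and 'rs' as in the
code above) is to build one placeholder per id -- is there a better way?

        // build "SELECT ... WHERE EMP_ID IN (?,?,?)" with one placeholder per id
        int[] ids = {2, 5, 7};

        StringBuilder sql = new StringBuilder("SELECT EMP_NAME FROM EMP WHERE EMP_ID IN (");
        for (int i = 0; i < ids.length; i++) {
            sql.append(i == 0 ? "?" : ",?");
        }
        sql.append(")");

        pm = con.prepareStatement(sql.toString());
        for (int i = 0; i < ids.length; i++) {
            pm.setInt(i + 1, ids[i]);   // JDBC parameter indexes are 1-based
        }
        rs = pm.executeQuery();
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }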

Re: [SQL] PERSISTANT PREPARE (another point of view)

Pavel wrote:

>
> try to write prototype and show advantages...

A prototype of what: an implementation in Postgres, or just the
efficiency of the PERSISTANT PREPARE idea?

> ...but I see some disadvantages
> too. Mainly you have to manage some shared memory space for stored
> plans. It's not an easy task - MySQL developers can tell you. Implementation
> in postgresql is a little bit difficult - lots of structures that live in
> process memory would have to be moved to shared memory.
>

Is it solved in MySQL, or have they just tried?

We could store only the PREP STATEMENT definition in shared memory
(probably something like stored procedures), and it could be run in
local process memory. We could even assume, for a start, that PREP
STATEMENTS would only be used for fetching data, and later introduce data
modification. Is there some simplified PG algorithm we could use to
understand the amount of work needed to introduce such a feature into PG?
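
For clarity, this is the kind of session-local prepared statement we are
talking about (a sketch using the Customers/Orders example from my earlier
post; 'Smith%' is just an example pattern). Today the plan below disappears
when the connection closes, which is exactly what persistent prepare is meant
to change:

PREPARE orders_by_name (text) AS
    SELECT C.CustomID, C.Name, O.OrderNum
    FROM Customers C INNER JOIN Orders O ON C.CustomID = O.CustomID
    WHERE C.Name LIKE $1;

EXECUTE orders_by_name ('Smith%');

DEALLOCATE orders_by_name;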

> This feature is nice, but the question is - who is going to write it?

With a little help from PG developers and good documentation, perhaps I
could put some programmers from my team on this job. They are mostly C++
programmers, but we have Delphi and Java if needed.

> Actually this problem is solved from outside - with pooling.
>

I'm very interested in learning more about this solution. Can you please
send me details or some links where I could research it?


Thank you for your reply Pavel.

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

Re: [ADMIN] Created non-owner user cannot see database

On 20 jul 2008, at 04.47, Daniel J. Summers wrote:
> grant usage on database custom_database to user_no_2;
>
> Now, none of these commands failed - they all came back with
> "CREATE ROLE" (or the appropriate response).

Are you sure?

'GRANT USAGE ON DATABASE...' is invalid syntax. You probably want
'GRANT CONNECT ON DATABASE...'.
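
With the names from your message, that would be something like:

GRANT CONNECT ON DATABASE custom_database TO user_no_2;
-- and, if the user also needs to use the objects inside it, something like
-- the following (the schema name here is only an example):
-- GRANT USAGE ON SCHEMA public TO user_no_2;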


Sincerely,

Niklas Johansson


--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

[JDBC] numeric type

For numeric types (with no explicit scale and precision) JDBC returns 0 for both precision and scale (ResultSetMetaData.getPrecision and getScale methods). This is breaking my app and IMO does not reflect the true state of things, since the Postgres docs state: „NUMERIC without any precision or scale creates a column in which numeric values of any precision and scale can be stored, up to the implementation limit on precision”.

 

Shouldn't the PG JDBC driver return the maximum possible values for precision and scale in such cases?
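
For illustration, this is the defensive check callers end up writing today
(a sketch only; it assumes a java.sql.ResultSet rs over such a column, and
column index 1 is just an example):

ResultSetMetaData md = rs.getMetaData();
int precision = md.getPrecision(1);   // currently 0 for an unconstrained NUMERIC column
int scale     = md.getScale(1);       // likewise 0
if (precision == 0) {
    // treat the column as unconstrained: values of any precision/scale may come back
}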

 

Peter

 

[pgeu-general] Partners/Sponsors for European PGDay needed

Hi there,

as you know, the organisation of the European PostgreSQL Day is going
smoothly.

As we have repeatedly said, our main goal is to do wide scale
advocacy for PostgreSQL, by keeping the event free and open to everyone.
It's not an easy task, as you may imagine.

However, in order to make it better than last year's Italian PGDay, we
need support from some of the European companies that use PostgreSQL.

We have prepared an outstanding advertising campaign
(http://www.pgday.org/en/sponsors/campaign) and I encourage every
company that cares about PostgreSQL to join it. There are several
options depending on budget availability, and there are some special and
limited options (such as the conference bags, or badges, ...).

Even if you intend to buy some merchandising material for PostgreSQL on
your own, I suggest that you let ITPUG and PG Europe do it for you, by
supporting PGDay. Indeed, since we have the option of buying material for
the "long run" (not just this PGDay, but future events too), we can save
more money through economies of scale and reinvest it in advocacy.

Also, partnerships are tax deductible in Europe.

If you are interested or require more information, please write
privately to info@itpug.org.

Thanks,
Gabriele

--
Gabriele Bartolini: Open source programmer and data architect
Current Location: Prato, Tuscany, Italy
Associazione Italian PostgreSQL Users Group: www.itpug.org
gabriele.bartolini@gmail.com | www.gabrielebartolini.it
"If I had been born ugly, you would never have heard of Pelé", George Best
http://www.linkedin.com/in/gbartolini

--
Sent via pgeu-general mailing list (pgeu-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgeu-general

Re: [GENERAL] Writing a user defined function

Hello

what is version of your postgresql?

regards
Pavel Stehule

2008/7/20 Suresh_ <suiyengar@yahoo.com>:
>
> I get this error
>
> ERROR: syntax error at or near "cursor"
> CONTEXT: invalid type name "scroll cursor for select * from tpcd.customer"
> compile of PL/pgSQL function "udf" near line 5
>
>
> Douglas McNaught wrote:
>>
>> On Fri, Jul 18, 2008 at 12:07 PM, Suresh_ <suiyengar@yahoo.com> wrote:
>>>
>>> Hello,
>>> I am trying to code a simple udf in postgres. How do I write sql
>>> commands
>>> into pl/sql ? The foll. code doesnt work.
>>>
>>> CREATE OR REPLACE FUNCTION udf()
>>> RETURNS integer AS $$
>>> BEGIN
>>> for i in 1..2000 loop
>>> for j in 1...10000 loop
>>> end loop;
>>> begin work;
>>
>> Postgres doesn't let you do transactions inside a function.
>>
>> Take out the BEGIN and COMMIT, and if you still get errors post the
>> function code and the error message that you get.
>>
>> -Doug
>>
>> --
>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>>
>>
>
> --
> View this message in context: http://www.nabble.com/Writing-a-user-defined-function-tp18532591p18551845.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] Getting to universal binaries for Darwin

On Sunday, 20 July 2008, Tom Lane wrote:
> * This disables AC_TRY_RUN tests, of course.  The only adverse
> consequence I noticed was failure to recognize that
> -Wl,-dead_strip_dylibs is applicable, which is marginally annoying but
> hardly fatal.
>
> On the whole I still wouldn't trust cross-compiled configure results.
> Better to get your prototype pg_config.h from the real deal.

For example, I'm a bit curious about the following aspect. This program should
fail to compile on 32-bit platforms but succeed on 64-bit:

#include <stddef.h>

struct s { char a; long b; };

int main(int argc, char *argv[])
{
int array[offsetof(struct s, b) - 5];

return 0;
}

What happens if you run gcc -arch i386 -arch ppc64 on it? Does it require
success on both output architectures?

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers