Tuesday, September 2, 2008

Re: [HACKERS] rmgr hooks and contrib/rmgr_hook

Simon Riggs <simon@2ndQuadrant.com> writes:
> On Tue, 2008-09-02 at 18:30 +0900, ITAGAKI Takahiro wrote:
>> How about adding a new variable "recovery_preload_libraries", similar to
>> shared_preload_libraries? Rmgr libraries listed in it would be loaded only
>> in the startup process, and only if recovery is needed.

> Good point. If others agree, I will re-implement this way.

Aside from the objections raised by Heikki, I'm not seeing the use-case
for an rmgr that only executes during recovery; in fact, I'm not entirely
sure that I see a use-case for this entire patch. Where would the WAL
records that the "loadable rmgr" processes come from?

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[pgadmin-hackers] SVN Commit by dpage: r7454 - in trunk/pgagent: . cmake

Author: dpage

Date: 2008-09-02 16:38:55 +0100 (Tue, 02 Sep 2008)

New Revision: 7454

Revision summary: http://svn.pgadmin.org/cgi-bin/viewcvs.cgi/?rev=7454&view=rev

Log:
Add more robust (and feature rich) PG and WX detection.

Added:
trunk/pgagent/cmake/
trunk/pgagent/cmake/FindPG.cmake
trunk/pgagent/cmake/FindWX.cmake
Modified:
trunk/pgagent/CMakeLists.txt

--
Sent via pgadmin-hackers mailing list (pgadmin-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-hackers

Re: [GENERAL] MERGE: performance advices

On Tue, Sep 2, 2008 at 8:10 AM, Steve Clark <sclark@netwolves.com> wrote:

> Is there a way to do something similar with the following? I am an SQL noob
> and the
> following takes longer to run than is reasonable, on the order of hours.
>
> insert into myevents select * from t_unit_event_log a where exists
> (select b.event_log_no from myevents b
> where a.event_status = 1 and a.event_ref_log_no IS NOT NULL
> and a.event_ref_log_no = b.event_log_no and a.event_log_no not
> in
> (select event_log_no from myevents)
> )


To start off with, this SQL statement can be refined a bit. Many of
the sub-query WHERE clause constraints have nothing to do with the
correlated sub-query. The refinement would look like so:

INSERT INTO Myevents
SELECT *
FROM T_unit_event_log AS A
WHERE A.event_status = 1
AND A.event_ref_log_no IS NOT NULL
AND A.event_log_no NOT IN ( SELECT event_log_no
FROM Myevents)
AND EXISTS ( SELECT B.event_log_no
FROM Myevents AS B
WHERE A.event_ref_log_no = B.event_log_no );


The next step would be to rework the NOT IN sub-query into a LEFT JOIN
with a WHERE ... IS NULL filter:

INSERT INTO Myevents
SELECT *
FROM T_unit_event_log AS A
LEFT JOIN Myevents AS C
ON A.event_log_no = C.event_log_no
WHERE A.event_status = 1
AND A.event_ref_log_no IS NOT NULL
AND C.event_log_no IS NULL
AND EXISTS ( SELECT B.event_log_no
FROM Myevents AS B
WHERE A.event_ref_log_no = B.event_log_no );

There is one possible alteration that may or may not improve
performance: replacing the EXISTS with a LEFT JOIN and a
WHERE ... IS NOT NULL filter:

INSERT INTO Myevents
SELECT *
FROM T_unit_event_log AS A
LEFT JOIN Myevents AS C
ON A.event_log_no = C.event_log_no
LEFT JOIN Myevents AS B
ON A.event_ref_log_no = B.event_log_no
WHERE C.event_log_no IS NULL
AND B.event_log_no IS NOT NULL
AND A.event_status = 1
AND A.event_ref_log_no IS NOT NULL;
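Whichever form is used, it is worth making sure the join and filter
columns are indexed, and inspecting the plan before running the full
insert. A sketch reusing the column names from the thread (the index
names are made up):

```sql
-- Indexes on the join/filter columns (names are illustrative):
CREATE INDEX myevents_event_log_no_idx ON myevents (event_log_no);
CREATE INDEX tuel_event_ref_log_no_idx ON t_unit_event_log (event_ref_log_no);

-- Inspect the plan of the SELECT before wrapping it in the INSERT:
EXPLAIN ANALYZE
SELECT *
FROM t_unit_event_log AS a
LEFT JOIN myevents AS c ON a.event_log_no = c.event_log_no
WHERE a.event_status = 1
  AND a.event_ref_log_no IS NOT NULL
  AND c.event_log_no IS NULL;
```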


--
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [HACKERS] Question regarding the database page layout.

Tom Lane <tgl@sss.pgh.pa.us> writes:

> "Ryan Bradetich" <rbradetich@gmail.com> writes:
>> The patch concept is fairly simple.
>
>> 1. Add a new boolean local variable: require_max_align
>> (initialized to false).
>
> This really can't possibly work, because you'd need to propagate
> knowledge of the tuple's alignment requirement all over the place.
> In particular, how would code *reading* the tuple know where the
> data starts?

Uh, at t_hoff, no?

> Also, I don't think you get (very much of) the actual benefit unless the
> code that inserts tuples into disk pages knows to do something different in
> the int-align case.

+1

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's PostGIS support!


Re: [PATCHES] WIP Join Removal

Simon Riggs <simon@2ndQuadrant.com> writes:
> On Mon, 2008-09-01 at 22:23 +0300, Heikki Linnakangas wrote:
>> Did plan invalidation make it safe to rely on the presence of a unique
>> index for planning decisions?

> My understanding was "Yes" and this case was the specific reason I
> originally wanted to pursue plan invalidation back in 2006.

Yeah, it should work. The theory is that any schema change that could
affect planning should result in broadcasting a relcache inval message
for the table (not just the index, note). I'm pretty confident that
that works for index addition and removal (cf index_update_stats and
index_drop). There might be some situations where we need to force a
relcache inval but don't currently do so --- constraint addition/removal
for instance I'm not too sure about. But that would represent an easily
fixable bug.

regards, tom lane

--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

Re: [HACKERS] Question regarding the database page layout.

"Ryan Bradetich" <rbradetich@gmail.com> writes:

> 4. If require_max_align = true, use the MAXALIGN macro; otherwise
> use the INTALIGN macro.

Huh, I didn't think of doing it like that.

But I'm confused. You seem to be tweaking the alignment of the data inside the
tuple? After the tuple header? I thought we had only one byte of wasted space
in there and that's used by the null bitmap. So unless you have more than 8
columns and some of them are null I wouldn't expect you to save any space. If
you do then I guess you could save 4 bytes if the null bitmap is 2-5 bytes
(mod 8) long.

I thought the goal was to save space by aligning the tuples on the page more
densely. That seems to me to be more fruitful as about half the tuples will
save four bytes even on tables with small or missing null bitmaps.

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's 24x7 Postgres support!


Re: [pgsql-es-ayuda] PostgreSQL Spanish Documentation Project

Moises:

I'm looking at the documentation here:

http://www.postgresql.org/docs/

then:

http://www.postgresql.org/docs/manuals/

In this section we have everything from version 8.3 back to 7.4. I looked
at version 8.3, roughly 251 pages. I think we should start by defining
which PG version will be translated first, see how many pages each person
can contribute (or it could also be by topic), and, while a working
framework is being set up, translation can begin (if anyone has the
curiosity and the uncontrollable urge to start :-) ), with the caveat of
announcing it ("Hey, I'll work on the <version> manual from <page> to
<page>!!!" or, alternatively, "Hey, I'll work on the <topics> topics of
the <version> manual!!!").

I read your document; it is relatively clear (I say relatively because
there are some concepts I don't know yet, but I'm catching up) and well
organized. For my part, I'm a Windows user, so I'll start by reading up
on DocBook and seeing what it's about.

As I said at the beginning, I think the most important things are:

1. A quality translation with respect to idioms and the like.
2. That no more than one person works on the same text; that is what the
coordinator/reviewer will be for, keeping track of it.

I don't know what you think, or whether I'm getting ahead of myself...

Well, that's it for now; let's stay in touch.

Regards,
Jch

2008/9/2 Moises Galan <pgsqlcol@gmail.com>:
> OK, Javier, for now let's see how the list reacts, and as proposals
> come up we'll take on specific tasks and then assign people responsible
> for them.
>

--
----------------------
Slds.
jchavez
linux User #397972 on http://counter.li.org/
--
TIP 8: explain analyze is your friend

RE: [pgsql-es-ayuda] default current_time

Thanks for replying, Jaime.

Look, last Sunday I tried exactly what you suggest, timezone = utc-4 and
also timezone = utc+4; in the latter case the service refused to start. I
haven't tried GMT+4 yet to see what happens. I don't understand whether
this occurs only on Windows or also on operating systems like Linux.
Another thing I don't understand is why, when timezone = utc-4 is set, it
shows a time ahead of GMT, when what it should do is show a time behind
GMT.

Ing. José Fermín Francisco Ferreras
San Francisco de Macorís, Rep. Dom.


> Date: Mon, 1 Sep 2008 22:14:36 -0500
> From: jcasanov@systemguards.com.ec
> To: josefermin54@hotmail.com
> Subject: Re: [pgsql-es-ayuda] default current_time
> CC: alvherre@alvh.no-ip.org; pgsql-es-ayuda@postgresql.org
>
> 2008/9/1 José Fermín Francisco Ferreras <josefermin54@hotmail.com>:
> >
> > In the postgresql.conf file, I changed the parameter timezone = unknown
> > to timezone = utc-4, and what it does is change the time to a later
> > one.
> >
>
> Try UTC+4... As I understand it, in my country, Ecuador, we are at
> GMT-5, but putting that in postgresql.conf's timezone moved the clock
> forward 5 hours. I had to set GMT+5 to get the correct time. I don't
> know whether my Ubuntu system is misconfigured, but something similar
> happened to me on Windows.
>
> --
> Atentamente,
> Jaime Casanova
> PostgreSQL support and training
> Systems consulting and development
> Guayaquil - Ecuador
> Cel. (593) 87171157
> --
> TIP 5: Have you read our extensive FAQ?
> http://www.postgresql.org/docs/faqs.FAQ.html
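The behavior both posters saw matches the POSIX time zone convention, in
which the sign of a numeric offset is inverted relative to everyday
usage: "UTC+4" means 4 hours *west* of Greenwich, i.e. civil GMT-4.
Named zones avoid the confusion entirely; a sketch (the zone name is
just an example):

```sql
-- POSIX-style specs such as "UTC+4" count hours west of Greenwich,
-- so they appear sign-reversed.  A named zone is unambiguous:
SET timezone = 'America/Guayaquil';
SHOW timezone;
```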



[GENERAL] pg_catalog forward compatibility

Is there a better way to query a database definition than select from
pg_catalog tables and views? For example, when I put out a new
software update, I need to verify that all the table, column,
constraint, etc definitions are correct for the update.

Thanks,
Bob
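For this kind of schema verification, the SQL-standard
information_schema views are a more release-stable surface than querying
the pg_catalog tables directly, since pg_catalog can change between
major versions. A sketch (the schema name is an assumption):

```sql
-- Columns of every table in the public schema, in a form that stays
-- stable across PostgreSQL releases:
SELECT table_name, column_name, data_type, is_nullable
FROM information_schema.columns
WHERE table_schema = 'public'
ORDER BY table_name, ordinal_position;
```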


[COMMITTERS] stackbuilder - wizard: Capitalise CPack correctly

Log Message:
-----------
Capitalise CPack correctly

Modified Files:
--------------
wizard:
CMakeLists.txt (r1.20 -> r1.21)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/stackbuilder/wizard/CMakeLists.txt.diff?r1=1.20&r2=1.21)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [HACKERS] [PATCH] Cleanup of GUC units code

"Marko Kreen" <markokr@gmail.com> writes:

> Uh. So you want to force "proper" units in presentations at the price
> of everyday admin operations? Does not seem like a sensible tradeoff.

It didn't seem like one to anyone else when Peter wrote the current
version either, but as the person willing to actually do the work and
write the code, Peter got to make the decision. Nobody else stepped up
to do the work to change it, and we can't exactly force Peter to do work
he doesn't agree with. The current version is unquestionably better than
what came before, where you had to specify some things in units of
Postgres blocks, for example.
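For context, the units at issue are the suffixes accepted in GUC
settings since 8.2, where a quoted value carries its unit. A minimal
sketch (the values are arbitrary):

```sql
-- Memory and time GUCs accept unit suffixes in quoted strings:
SET work_mem = '64MB';
SET statement_timeout = '5min';
SHOW work_mem;
```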

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's RemoteDBA services!


Re: [pgsql-es-ayuda] PostgreSQL Spanish Documentation Project

Moises Galan wrote:


> It would also be good to know how other PostgreSQL user communities
> carried out the translation of the documentation from English into
> their local language.

German: Peter Eisentraut spent three months chained to his chair until
he finished it. Methodology: he took the SGML files and translated the
text directly.

French: similar, but instead of a single person, it was several.

Japanese: I don't know the details, but I believe it was similar to the
French effort.

> Could we adopt some technology they used in order to do the same
> ourselves? Who can look into this?

We can use xml2pot.


> We will also need to create a mailing list for the project to translate
> the documentation into Spanish. Who can take care of that?

Not necessary; there are already lists on pgfoundry:
webtranslator-general@pgfoundry.org

> The translation process will be long and painful, because we don't have
> the online translation management application that existed at some
> point. I believe what we need now is to suggest clear guidelines to
> start this work; if we don't take concrete action, the translation will
> never happen.

Let me call Mario to ask how we can bring the platform back up.

--
Alvaro Herrera Developer, http://www.PostgreSQL.org/
"Experience tells us that man has peeled potatoes millions of times, but
it was necessary to admit the possibility that, in one case in millions,
the potatoes might peel the man" (Ijon Tichy)
--
TIP 3: If you found the answer to your problem, post it; others will thank you

Re: [GENERAL] MERGE: performance advices

Richard Broersma wrote:
> On Tue, Sep 2, 2008 at 4:19 AM, Ivan Sergio Borgonovo
> <mail@webthatworks.it> wrote:
>
>
>
>>insert into d (pk, c1, c2, ...) select pk, c1, c2, c3 from s
>> where s.pk not in (select pk from d);
>
>
> This insert statement might be faster:
>
> INSERT INTO d (pk, c1, c2, ... )
> SELECT pk, c1, c2, ...
> FROM s
> LEFT JOIN d ON s.pk = d.pk
> WHERE d.pk IS NULL;
>
>
Hello Richard,

Is there a way to do something similar with the following? I am an SQL noob and the
following takes longer to run than is reasonable, on the order of hours.

insert into myevents select * from t_unit_event_log a where exists
(select b.event_log_no from myevents b
where a.event_status = 1 and a.event_ref_log_no IS NOT NULL
and a.event_ref_log_no = b.event_log_no and a.event_log_no not in
(select event_log_no from myevents)
)

Thanks,
Steve


Re: [pgsql-es-ayuda] PostgreSQL Spanish Documentation Project

OK, Javier, for now let's see how the list reacts, and as proposals come
up we'll take on specific tasks and then assign people responsible for
them.

Re: [HACKERS] Question regarding the database page layout.

"Ryan Bradetich" <rbradetich@gmail.com> writes:
> The patch concept is fairly simple.

> 1. Add a new boolean local variable: require_max_align
> (initialized to false).

This really can't possibly work, because you'd need to propagate
knowledge of the tuple's alignment requirement all over the place.
In particular, how would code *reading* the tuple know where the
data starts? Also, I don't think you get (very much of) the actual
benefit unless the code that inserts tuples into disk pages knows
to do something different in the int-align case.

It's conceivable that we could make this work if we wanted to dedicate
an infomask bit to showing whether the tuple needs int or double
alignment. I don't really think it's worth the trouble though.

regards, tom lane


Re: [pgsql-fr-generale] plpgsql: Optional parameters?

On Tuesday 02 September 2008, Samuel ROZE wrote:
> Is it possible to make parameters explicitly optional?

This is currently being proposed by Pavel Stehule (see the -hackers
archives) for 8.4, but it seems difficult to make parameter labels and
default values coexist cleanly. To be continued.
http://archives.postgresql.org/pgsql-hackers/2008-08/msg01149.php

Regards,
--
dim

Re: [HACKERS] Is this really really as designed or defined in some standard

2008/9/2 Tom Lane <tgl@sss.pgh.pa.us>:
> "Pavel Stehule" <pavel.stehule@gmail.com> writes:
>> 2008/9/2 Tom Lane <tgl@sss.pgh.pa.us>:
>>> BTW, there are actually two separate issues here: input parameters and
>>> output parameters. After brief thought it seems like we should enforce
>>> uniqueness of non-omitted parameter names for IN parameters (including
>>> INOUT), and separately enforce uniqueness of non-omitted parameter names
>>> for OUT parameters (including INOUT).
>
>> It's well thought out, but I'm afraid this could hide some bugs, and
>> it's a little bit dangerous.
>
>> I think we could simply duplicate the values in the result if we allow
>> duplicate parameter names in a function.
>
> Um ... what? I'm not sure what behavior you're proposing here.
>
> regards, tom lane
>

I am sorry - I really have to learn English. Simply put, I don't think
duplicate OUT parameters are a good idea, but I don't have strong
objections - some programmer bugs do become visible in this case.

regards
Pavel


Re: [BUGS] BUG #4393: failed toget system metics for terminal services:87

Russell Smith <mr-russ@pws.com.au> writes:
> Hemavathi B N wrote:
>> PostgreSQL version: 8.0
>> Operating system: windows 2003
>> Description: failed toget system metics for terminal services:87

> My guess is that you are not installing from the console, but are using
> terminal services. I don't think you can install from a terminal
> services connection. Try installing directly from the console.

Also try a newer Postgres release, per the identical bug report posted
less than a week ago:
http://archives.postgresql.org/pgsql-bugs/2008-08/msg00241.php
I believe this is something that's fixed as of 8.2.

regards, tom lane

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

[COMMITTERS] stackbuilder - wizard: Use CXXFLAGS not CPPFLAGS

Log Message:
-----------
Use CXXFLAGS not CPPFLAGS

Modified Files:
--------------
wizard/cmake:
FindWX.cmake (r1.3 -> r1.4)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/stackbuilder/wizard/cmake/FindWX.cmake.diff?r1=1.3&r2=1.4)


Re: [HACKERS] Is this really really as designed or defined in some standard

"Pavel Stehule" <pavel.stehule@gmail.com> writes:
> 2008/9/2 Tom Lane <tgl@sss.pgh.pa.us>:
>> BTW, there are actually two separate issues here: input parameters and
>> output parameters. After brief thought it seems like we should enforce
>> uniqueness of non-omitted parameter names for IN parameters (including
>> INOUT), and separately enforce uniqueness of non-omitted parameter names
>> for OUT parameters (including INOUT).

> It's well thought out, but I'm afraid this could hide some bugs, and
> it's a little bit dangerous.

> I think we could simply duplicate the values in the result if we allow
> duplicate parameter names in a function.

Um ... what? I'm not sure what behavior you're proposing here.

regards, tom lane


Re: [pgsql-es-ayuda] PostgreSQL Spanish Documentation Project

2008/9/2 Moises Galan <pgsqlcol@gmail.com>:
> Greetings to all the members of the list.
>
> Indeed, the recommended approach is to join a translation group, as
> Roberto Andrade Fonseca mentions in his post. The problem is that no
> such group currently exists, so whoever wants to join in can do so, and
> that is how we will form the group. Unfortunately we don't have a
> platform like the one that existed at http://l10n.postgresql.cl/es/.
>
> So I suggest that everyone who wants to take part in the translation
> contribute via CVS and send their work to the central repository while
> we get a working team up and running. If a programmer turns up and gets
> the framework working again, we could move the project to it; for now I
> suggest we work with CVS or, if there is an alternative, decide on one
> now.
>
> It would also be good to know how other PostgreSQL user communities
> carried out the translation of the documentation from English into
> their local language.
>
> Could we adopt some technology they used in order to do the same
> ourselves? Who can look into this?
>
> Of course, as people interested in the translation appear, we will need
> to put some general guidelines in place so that there is at least
> minimal coordination from the start. For the moment I suggest creating
> the working group I have called Proyecto de Traduccion de la
> Documentacion de Postgresql al Español, or in abbreviated form, by its
> English initials, PSDP-es, for PostgreSQL Spanish Documentation
> Project. See the attached documents.
>
> http://postgresql.org.pe/sites/default/files/introduccion%20psdp-es.pdf
>
> http://postgresql.org.pe/sites/default/files/propuesta%20PSDP-es.pdf
>
> We will also need to create a mailing list for the translation project.
> Who can take care of that? I don't know whether a dedicated space on
> the PostgreSQL web site is possible, or how the web development team
> handles that. I don't know how we can get the PSDP-es text onto the
> site either. Who can do, or find out about, this?
>
> The translation process will be long and painful, because we don't have
> the online translation management application that existed at some
> point. I believe what we need now is to suggest clear guidelines to
> start this work; if we don't take concrete action, the translation will
> never happen.
>
> For now, this is the summary of the preliminary tasks that I think
> could be carried out to get the translation project firmly under way:
>
> 1. Bring the framework back up or, failing that, adopt a tool that
> another PostgreSQL community has used to translate its documentation.
>
> 2. If no framework can be used, choose a translation manager as a local
> alternative for each translator.
>
> Recruit a minimum number of translators.
>
> Establish a reviewer.
>
> Create a mailing list for the Spanish documentation project and a space
> on the web site to host the translation project (and the translation of
> the site itself).
>
> Define recipients for translations in plain text (to be converted into
> the proper format later).
>
> Incorporate suggestions about the text that organizes the PSDP-es.
>
> I hope that with this outline we can get something done and really
> begin.
>

Dear Moises:

For my part, I can contribute translations once someone takes charge of
coordinating the team; it would be good for that person to assign tasks
(I can't take on a coordination role myself, since with my workload I
might end up not doing it properly).

So once tasks, responsibilities, etc. are defined, I can contribute by
translating. I think you are the one with the clearest picture here, so
let's keep in touch.

Regards to all.

Jch

--
----------------------
Slds.
jchavez
linux User #397972 on http://counter.li.org/
--
TIP 7: don't forget to increase your "free space map" configuration

Re: [HACKERS] What is d2mdir?

Abhijit Menon-Sen <ams@oryx.com> writes:
> At 2008-09-02 15:10:23 +0300, devrim@gunduz.org wrote:
>>
>> [devrim@laptop sgml]$ make man

> As Alvaro noted recently, you need to use "make man D2MDIR=/some/path".

I see it's been like that for quite some time, but still it seems pretty
bogus. Why isn't configure handling this? If there's some good reason
not to automate it, why isn't it documented in the "Building The
Documentation" appendix?

regards, tom lane


[pgsql-es-ayuda] PostgreSQL Spanish Documentation Project

Greetings to all the members of the list.

Indeed, the recommended approach is to join a translation group, as Roberto Andrade Fonseca mentions in his post. The problem is that no such group currently exists, so whoever wants to join in can do so, and that is how we will form the group. Unfortunately we don't have a platform like the one that existed at http://l10n.postgresql.cl/es/.

So I suggest that everyone who wants to take part in the translation contribute via CVS and send their work to the central repository while we get a working team up and running. If a programmer turns up and gets the framework working again, we could move the project to it; for now I suggest we work with CVS or, if there is an alternative, decide on one now.

It would also be good to know how other PostgreSQL user communities carried out the translation of the documentation from English into their local language.

Could we adopt some technology they used in order to do the same ourselves? Who can look into this?

Of course, as people interested in the translation appear, we will need to put some general guidelines in place so that there is at least minimal coordination from the start. For the moment I suggest creating the working group I have called Proyecto de Traduccion de la Documentacion de Postgresql al Español, or in abbreviated form, by its English initials, PSDP-es, for PostgreSQL Spanish Documentation Project. See the attached documents.

http://postgresql.org.pe/sites/default/files/introduccion%20psdp-es.pdf

http://postgresql.org.pe/sites/default/files/propuesta%20PSDP-es.pdf

We will also need to create a mailing list for the translation project. Who can take care of that? I don't know whether a dedicated space on the PostgreSQL web site is possible, or how the web development team handles that. I don't know how we can get the PSDP-es text onto the site either. Who can do, or find out about, this?

The translation process will be long and painful, because we don't have the online translation management application that existed at some point. I believe what we need now is to suggest clear guidelines to start this work; if we don't take concrete action, the translation will never happen.

For now, this is the summary of the preliminary tasks that I think could be carried out to get the translation project firmly under way:

1. Bring the framework back up or, failing that, adopt a tool that another PostgreSQL community has used to translate its documentation.

2. If no framework can be used, choose a translation manager as a local alternative for each translator.

  1. Recruit a minimum number of translators.

  2. Establish a reviewer.

  3. Create a mailing list for the Spanish documentation project and a space on the web site to host the translation project (and the translation of the site itself).

  4. Define recipients for translations in plain text (to be converted into the proper format later).

  5. Incorporate suggestions about the text that organizes the PSDP-es.

I hope that with this outline we can get something done and really begin.







Re: [GENERAL] RAISE NOTICE format in pgAdmin

Bill Todd wrote:
> If I have a series of RAISE NOTICE 'xxx' statements in a plpgsql
> function and I call the function from pgAdmin the notice messages are
> concatenated on a single line on the Messages tab. Is there any way to
> get each message to appear on a separate line?
>
> Is there a better way than using RAISE NOTICE to debug functions?
>
> Bill
>
Bill,
Make sure you are on at least version 8.2 (8.3 is preferred) and use a
PL/pgSQL debugger. Later versions of pgAdmin have one built in, and a
standalone version is available from:
http://www.amsoftwaredesign.com/debugger_client_announce (built with
Delphi)

If you are using the win32 version, there is an option at the end of the
installer script to install the debugger part. On 8.2 or 8.3 you will
need to install it yourself.
Please see: http://pgfoundry.org/projects/edb-debugger/

Hope that helps.

Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com
Home of Lightning Admin for PostgreSQL



[pgsql-www] categorizing mailing lists

Selena Deckelmann escribió:
> On Mon, Sep 1, 2008 at 11:10 AM, Alvaro Herrera
> <alvherre@commandprompt.com> wrote:

> > The previous version was using the Description field (or a transcription
> > thereof), but for all other lists it can be a long text.
> >
> > Should we add a "short description" column to the database?
>
> Please.

Okay, so I had a look at this and this is my conclusion: pgus-general
and pgeu-general are in the wrong category. I was just talking to
Stefan:

(10:27:05) alvherre@pg: so here's what I want to do
(10:27:10) alvherre@pg: I need a new column in the lists table
(10:27:13) alvherre@pg: which is a short description
(10:27:21) alvherre@pg: that text is what I would put in the left-side menu
(10:27:40) alvherre@pg: it is pretty obvious that in "user groups" I can just name the geographical region
(10:27:48) alvherre@pg: and in regional lists, I can name the language
(10:27:59) alvherre@pg: but I cannot do sh*t about pgus and pgeu :-(
(10:28:13) alvherre@pg: maybe we should create another category for those??

What do people think about creating a new category (list group) and
sticking pgus-general and pgeu-general on it? What should that category
be named?

Also, I take it nobody objects to the "short description" thing.

--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

--
Sent via pgsql-www mailing list (pgsql-www@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-www

Re: [HACKERS] Out-of-tree compilation seems broken in HEAD (plpgsql)

Alvaro Herrera <alvherre@commandprompt.com> writes:
> Marko Kreen escribió:
>> I'll try with new cvs checkout.

> That'll have the same effect as make maintainer-clean, and should work
> equally well.

No, it'll work better. The real problem here is that in the CVS-HEAD
makefiles, "make maintainer-clean" fails to remove the *old* derived
files.

Perhaps we should have left the old filenames listed in the clean
targets...

regards, tom lane


Re: [HACKERS] What is d2mdir?

On Tue, 2008-09-02 at 19:47 +0530, Abhijit Menon-Sen wrote:
> > [devrim@laptop sgml]$ make man
>
> As Alvaro noted recently,

I probably missed that.

> you need to use "make man D2MDIR=/some/path".

Thanks :)

Cheers.
--
Devrim GÜNDÜZ, RHCE
devrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org

Re: [SQL] order of rows in update

On Tue, Sep 2, 2008 at 2:58 AM, Achilleas Mantzios
<achill@matrix.gatewaynet.com> wrote:
> is there an (implicit) way to make a multirow update execute on some rows prior to other rows?
> It is needed in a case where a trigger is defined on the table as FOR EACH ROW, and it is mandatory
> that the trigger is run for some certain rows before it is run on the rest of the rows.
>
> Is there anything reliable to achieve this without making poor assumptions about future
> versions, or should I just "SELECT ... ORDER BY ..." and then perform individual UPDATEs?

The only way that I know how to do this is to create a named cursor of
the rows that you want to update, and then for each record call

UPDATE ... FROM ... WHERE CURRENT OF cursorname;


But why are you even having this problem to begin with? What you are
describing sounds like a database normalization problem.
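A minimal sketch of the named-cursor approach (table and column names here are hypothetical, purely for illustration):

```sql
-- Walk the rows in the required order via a named cursor,
-- updating each one as the cursor reaches it.
BEGIN;
DECLARE c CURSOR FOR
    SELECT * FROM mytable ORDER BY priority FOR UPDATE;
FETCH NEXT FROM c;   -- repeat until no row is returned
UPDATE mytable SET status = 'done' WHERE CURRENT OF c;
CLOSE c;
COMMIT;
```

Because the cursor is declared with ORDER BY, the FOR EACH ROW trigger fires in exactly that order, at the cost of one UPDATE statement per row.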


--
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

Re: [pgsql-fr-generale] RETURN cannot have a parameter in function with OUT parameters

Christophe Chauvet wrote:
> Guillaume Lelarge wrote:
>> Which version of PostgreSQL?
>> Can you show us your function?
>>
> My mistake, it's under Oracle that this is possible :(; indeed
> PostgreSQL does not support that possibility.
>

For once, not knowing Oracle works to my advantage :)


--
Guillaume.
http://www.postgresqlfr.org
http://dalibo.com

--
Sent via pgsql-fr-generale mailing list (pgsql-fr-generale@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-fr-generale

Re: [HACKERS] What is d2mdir?

At 2008-09-02 15:10:23 +0300, devrim@gunduz.org wrote:
>
> [devrim@laptop sgml]$ make man

As Alvaro noted recently, you need to use "make man D2MDIR=/some/path".

-- ams

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[pgsql-es-ayuda] pg_hba.conf erroneo


Wolfgang Rodriguez wrote:

> We have reviewed the pg_hba.conf file and it contains no errors.

Evidently this is the step that is failing :-) If you like, you can
show us the file so we can see where the error might be. Another thing
that may be happening is that you are looking at the wrong file. Run SHOW
hba_file so it tells you the location of the file the server is actually
reading.


PS: it is bad manners to ask a question in the conversation thread of
another question. I suggest that next time you create a new message,
with a subject relevant to your question, instead of replying to an
earlier message.
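The check suggested above can be run directly in psql (the path shown in the comment is just an example, not the actual location on the poster's server):

```sql
-- Ask the running server which pg_hba.conf it is actually reading
SHOW hba_file;
-- e.g. /var/lib/pgsql/data/pg_hba.conf on many Linux installs
```

Whatever path this returns is the file to inspect; editing a different copy of pg_hba.conf has no effect on the running server.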

--
Alvaro Herrera http://www.advogato.org/person/alvherre
"The problem with the future is that it keeps turning into the present"
(Hobbes)
--
TIP 3: If you found the answer to your problem, post it; others will thank you

Re: [PERFORM] limit clause breaks query planner?

Thanks very much for your help Guillaume, I appreciate you spending time on
this.

> Well, if you have 95% of NULL actorid_ and 10% for each value of
> pooledactor_, then it makes sense to assume it will have to fetch
> about 150 rows to find the 15 awaited ones...

This is only true if the data is randomly distributed, which it isn't
unfortunately.

To any postgres developers reading this, two words: planning hints :-). In
the face of imperfect information it's not possible to write a perfect
planner, so please give us the ability to use the heuristic information we
have as developers, which the planner will never know about. Even if we
force postgres to use queries that give sub-optimal performance some of the
time, once we can write sufficiently performant queries, we're happy. In
cases like this where postgres gets it very wrong (and this is just a very
simple query after all), well, we're screwed.

I'm going to try partitioning my database along the pooledactor_ column to
see if I can get reasonable performance for my purposes, even if I can't
reach 10 million rows.

Thanks
David

-----Original Message-----
From: Guillaume Cottenceau [mailto:gc@mnc.ch]
Sent: 02 September 2008 14:56
To: David West
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] limit clause breaks query planner?

"David West" <david.west 'at' cusppoint.com> writes:

> INFO: "jbpm_taskinstance": moved 1374243 row versions, truncated 166156
> to 140279 pages

nothing that would explain the planner being so far off :/

> Yep, the table is from the jboss jbpm (business process management) schema.

I went for that kind of test then, but it didn't help much:

create table foo ( bar character varying(255), baz character varying(255),
id_ bigint NOT NULL,
class_ character(1) NOT NULL,
version_ integer NOT NULL,
name_ character varying(255),
description_ character varying(4000),
create_ timestamp without time zone,
start_ timestamp without time zone,
end_ timestamp without time zone,
duedate_ timestamp without time zone,
priority_ integer,
iscancelled_ boolean,
issuspended_ boolean,
isopen_ boolean,
issignalling_ boolean,
isblocking_ boolean,
task_ bigint,
token_ bigint,
procinst_ bigint,
swimlaninstance_ bigint,
taskmgmtinstance_ bigint,
processname_ character varying(255) );

insert into foo ( select generate_series(0, 10000000) / 1000000, case when
random() < 0.05 then 'Today Alcatel-Lucent has announced that Philippe Camus
is appointed non-executive Chairman and Ben Verwaayen is appointed Chief
Executive Officer.' else null end, 1, 'a', 1 );

create index foobaz on foo(baz);
create index foobar on foo(bar);
analyze foo;

Estimated costs still look correct on my side:

gc=# explain select * from foo where baz is null and bar in ('8') limit 15;
QUERY PLAN
------------------------------------------------------------------------------------
Limit (cost=0.00..0.46 rows=15 width=1795)
-> Index Scan using foobar on foo (cost=0.00..26311.70 rows=860238 width=1795)
Index Cond: ((bar)::text = '8'::text)
Filter: (baz IS NULL)
(4 rows)

gc=# set enable_indexscan = off;
SET
gc=# explain select * from foo where baz is null and bar in ('8') limit 15;
QUERY PLAN
----------------------------------------------------------------------
Limit (cost=0.00..3.46 rows=15 width=1795)
-> Seq Scan on foo (cost=0.00..198396.62 rows=860238 width=1795)
Filter: ((baz IS NULL) AND ((bar)::text = '8'::text))
(3 rows)


>>Btw, it would help if you could reproduce my test scenario and
>>see if PG uses "correctly" the indexscan. It is better to try on
>>your installation, to take care of any configuration/whatever
>>variation which may create your problem.
>
> I have tried your example and I get the same results as you.
>
> db=# explain select * from foo where baz is null and bar = '8' limit 15;
>
> QUERY PLAN
> ---------------------------------------------------------------------------------
> Limit (cost=0.00..0.53 rows=15 width=154)
> -> Index Scan using foobar on foo (cost=0.00..33159.59 rows=934389 width=154)
> Index Cond: (bar = 8)
> Filter: (baz IS NULL)
> (4 rows)
>
> db=# drop index foobar;
> DROP INDEX
> db=# explain select * from foo where baz is null and bar = '8' limit 15;
>
> QUERY PLAN
> ---------------------------------------------------------------------
> Limit (cost=0.00..2.87 rows=15 width=154)
> -> Seq Scan on foo (cost=0.00..178593.35 rows=934389 width=154)
> Filter: ((baz IS NULL) AND (bar = 8))
> (3 rows)
>
> It's choosing the index because of a cost of 0.53 vs a cost of 2.87 for
> sequential scan. I wonder why in my real tables the index scan cost is
> higher than the sequential scan cost. Perhaps because of the extra width
> of my rows?

You may try to crosscheck with the new test I've put upper, but
I'm skeptical :/

I think I've unfortunately more than reached my level of
incompetence on that subject, sorry I wasn't able to better
locate your problem :/

>>> From looking at the plans, it seems to be postgres is assuming it will
>>> only
>>> have to sequentially scan 15 rows, which is not true in my case
>>> because column B is not distributed randomly (nor will it be in
>>> production). Would
>>
>>Why do you say that? The explanation seems to rather tell that it
>>(correctly) assumes that the seqscan would bring up about 1M rows for the
>>selected values of A and B, and then it will limit to 15 rows.
>
> I say that because the plan gives a really really low number (3.21) for
> the estimated cost after the limit on sequential scan:
>
> Select * from JBPM_TASKINSTANCE this_ where actorid_ is null and
> this_.POOLEDACTOR_ in ('21') limit 15
> "Limit (cost=0.00..3.21 rows=15 width=128) (actual
> time=84133.211..84187.247 rows=15 loops=1)"
> " -> Seq Scan on jbpm_taskinstance this_ (cost=0.00..234725.85
> rows=1095365 width=128) (actual time=84133.205..84187.186 rows=15 loops=1)"
> " Filter: ((actorid_ IS NULL) AND ((pooledactor_)::text =
> '21'::text))"
> "Total runtime: 84187.335 ms"
>
> It just seems to me it is not taking into account at all that it might
> have to scan thousands or millions of rows before it gets the 15 rows it needs.

Well, if you have 95% of NULL actorid_ and 10% for each value of
pooledactor_, then it makes sense to assume it will have to fetch
about 150 rows to find the 15 awaited ones...

In the end, if PG doesn't know about data distribution, its
behavior makes total sense to me: 150 rows of width=128 bytes
need only 3 disk pages, so it shouldn't be faster than with a
seqscan, theoretically; however, I am not sure then why on my
simple "foo" test it isn't using the same decision..


Btw, that should not solve your problem, but normally, to help PG
choose indexscan often enough, it's good to reduce
random_page_cost which is 4 by default (a high value for nowadays
servers), increase effective_cache_size to what's available on
your machine, and potentially the shared_buffers which normally
helps for a good deal of matters, performance-wise.

--
Guillaume Cottenceau, MNC Mobile News Channel SA, an Alcatel-Lucent Company
Av. de la Gare 10, 1003 Lausanne, Switzerland - direct +41 21 317 50 36


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [pgsql-fr-generale] RETURN cannot have a parameter in function with OUT parameters

Guillaume Lelarge wrote:
> Which version of PostgreSQL?
> Can you show us your function?
>
My mistake, it's under Oracle that this is possible :(; indeed
PostgreSQL does not support that possibility.

Christophe.
P.S. I had forgotten to CC the list :)

--
Sent via pgsql-fr-generale mailing list (pgsql-fr-generale@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-fr-generale

Re: [HACKERS] Out-of-tree compilation seems broken in HEAD (plpgsql)

Marko Kreen wrote:
> On 9/2/08, Alvaro Herrera <alvherre@commandprompt.com> wrote:
> > Marko Kreen wrote:

> > > error: pl_gram.h: No such file or directory
> >
> > Try running "make maintainer-clean" -- see
> >
> > http://archives.postgresql.org/message-id/20080829162252.GG3983%40alvh.no-ip.org
>
> Note I started with empty tree..

The build tree may be empty, but the source tree contains the derived
files. I urge you to read the whole thread I linked.

> I'll try with new cvs checkout.

That'll have the same effect as make maintainer-clean, and should work
equally well.

--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Out-of-tree compilation seems broken in HEAD (plpgsql)

On 9/2/08, Alvaro Herrera <alvherre@commandprompt.com> wrote:
> Marko Kreen wrote:
> > $ mkdir build
> > $ cd build
> > $ ../PostgreSQL.dev/configure
> > $ make
> > [...]
> > gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith
> > -Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing
> > -fwrapv -fpic -I/home/marko/src/build/../PostgreSQL.dev/src/pl/plpgsql/src
> > -I../../../../src/include
> > -I/home/marko/src/build/../PostgreSQL.dev/src/include -D_GNU_SOURCE
> > -c -o pl_comp.o
> > /home/marko/src/build/../PostgreSQL.dev/src/pl/plpgsql/src/pl_comp.c
> > /home/marko/src/build/../PostgreSQL.dev/src/pl/plpgsql/src/pl_comp.c:20:21:
> > error: pl_gram.h: No such file or directory
>
>
> Try running "make maintainer-clean" -- see
>
> http://archives.postgresql.org/message-id/20080829162252.GG3983%40alvh.no-ip.org

Note I started with empty tree..

I'll try with new cvs checkout.

--
marko

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

RE: [pgsql-es-ayuda] Re: [pgsql-es-ayuda] Re: [pgsql-es-ayuda] Manual de instalación

Good morning and greetings to everyone.

Although I have been reading the messages on this list for some time, this is the first time I write.
I am facing the following situation:

In my office we are building an application on Windows using PHP with Apache and Postgres.

When connecting from PHP to Postgres, we use an IP address as the host, for example 10.1.0.201, but when accessing the application via browser it shows us the following error:

Warning: pg_connect() [function.pg-connect]: Unable to connect to PostgreSQL server: FATAL: No se encuentra pg_hba.conf o el archivo es incorrecto HINT: Vea el registro del servidor para obtener más detalles. in C:\wamp\www\AutoServicios\Librerias\Conex_autoservicios.php on line 21

As you can observe, we use Wamp on our local PCs to develop the applications, we publish the final versions on a server running Linux and Apache, and on another server we have Postgres running on Linux.

On line 21 of the PHP program we have the following instruction:

$B_idconexion=pg_connect("host=$B_strhost dbname=$B_strbd port=$B_strpuerto user=$B_strusuario password=$B_strpassword");

Where, as I said before, $B_strhost is a local IP.

We have reviewed the pg_hba.conf file and it contains no errors.

We would greatly appreciate your help with this.
If you need any other information, just tell us and we will gladly send it.

Regards,

Wolfgang Rodriguez.



Re: [PERFORM] limit clause breaks query planner?

"David West" <david.west 'at' cusppoint.com> writes:

> INFO: "jbpm_taskinstance": moved 1374243 row versions, truncated 166156 to
> 140279 pages

nothing that would explain the planner being so far off :/

> Yep, the table is from the jboss jbpm (business process management) schema.

I went for that kind of test then, but it didn't help much:

create table foo ( bar character varying(255), baz character varying(255),
id_ bigint NOT NULL,
class_ character(1) NOT NULL,
version_ integer NOT NULL,
name_ character varying(255),
description_ character varying(4000),
create_ timestamp without time zone,
start_ timestamp without time zone,
end_ timestamp without time zone,
duedate_ timestamp without time zone,
priority_ integer,
iscancelled_ boolean,
issuspended_ boolean,
isopen_ boolean,
issignalling_ boolean,
isblocking_ boolean,
task_ bigint,
token_ bigint,
procinst_ bigint,
swimlaninstance_ bigint,
taskmgmtinstance_ bigint,
processname_ character varying(255) );

insert into foo ( select generate_series(0, 10000000) / 1000000, case when random() < 0.05 then 'Today Alcatel-Lucent has announced that Philippe Camus is appointed non-executive Chairman and Ben Verwaayen is appointed Chief Executive Officer.' else null end, 1, 'a', 1 );

create index foobaz on foo(baz);
create index foobar on foo(bar);
analyze foo;

Estimated costs still look correct on my side:

gc=# explain select * from foo where baz is null and bar in ('8') limit 15;
QUERY PLAN
------------------------------------------------------------------------------------
Limit (cost=0.00..0.46 rows=15 width=1795)
-> Index Scan using foobar on foo (cost=0.00..26311.70 rows=860238 width=1795)
Index Cond: ((bar)::text = '8'::text)
Filter: (baz IS NULL)
(4 rows)

gc=# set enable_indexscan = off;
SET
gc=# explain select * from foo where baz is null and bar in ('8') limit 15;
QUERY PLAN
----------------------------------------------------------------------
Limit (cost=0.00..3.46 rows=15 width=1795)
-> Seq Scan on foo (cost=0.00..198396.62 rows=860238 width=1795)
Filter: ((baz IS NULL) AND ((bar)::text = '8'::text))
(3 rows)


>>Btw, it would help if you could reproduce my test scenario and
>>see if PG uses "correctly" the indexscan. It is better to try on
>>your installation, to take care of any configuration/whatever
>>variation which may create your problem.
>
> I have tried your example and I get the same results as you.
>
> db=# explain select * from foo where baz is null and bar = '8' limit 15;
>
> QUERY PLAN
> ---------------------------------------------------------------------------------
> Limit (cost=0.00..0.53 rows=15 width=154)
> -> Index Scan using foobar on foo (cost=0.00..33159.59 rows=934389 width=154)
> Index Cond: (bar = 8)
> Filter: (baz IS NULL)
> (4 rows)
>
> db=# drop index foobar;
> DROP INDEX
> db=# explain select * from foo where baz is null and bar = '8' limit 15;
>
> QUERY PLAN
> ---------------------------------------------------------------------
> Limit (cost=0.00..2.87 rows=15 width=154)
> -> Seq Scan on foo (cost=0.00..178593.35 rows=934389 width=154)
> Filter: ((baz IS NULL) AND (bar = 8))
> (3 rows)
>
> It's choosing the index because of a cost of 0.53 vs a cost of 2.87 for
> sequential scan. I wonder why in my real tables the index scan cost is
> higher than the sequential scan cost. Perhaps because of the extra width of
> my rows?

You may try to crosscheck with the new test I've put upper, but
I'm skeptical :/

I think I've unfortunately more than reached my level of
incompetence on that subject, sorry I wasn't able to better
locate your problem :/

>>> From looking at the plans, it seems to be postgres is assuming it will
>>> only
>>> have to sequentially scan 15 rows, which is not true in my case
>>> because column B is not distributed randomly (nor will it be in
>>> production). Would
>>
>>Why do you say that? The explanation seems to rather tell that it
>>(correctly) assumes that the seqscan would bring up about 1M rows for the
>>selected values of A and B, and then it will limit to 15 rows.
>
> I say that because the plan gives a really really low number (3.21) for the
> estimated cost after the limit on sequential scan:
>
> Select * from JBPM_TASKINSTANCE this_ where actorid_ is null and
> this_.POOLEDACTOR_ in ('21') limit 15
> "Limit (cost=0.00..3.21 rows=15 width=128) (actual
> time=84133.211..84187.247 rows=15 loops=1)"
> " -> Seq Scan on jbpm_taskinstance this_ (cost=0.00..234725.85
> rows=1095365 width=128) (actual time=84133.205..84187.186 rows=15 loops=1)"
> " Filter: ((actorid_ IS NULL) AND ((pooledactor_)::text =
> '21'::text))"
> "Total runtime: 84187.335 ms"
>
> It just seems to me it is not taking into account at all that it might have
> to scan thousands or millions of rows before it gets the 15 rows it needs.

Well, if you have 95% of NULL actorid_ and 10% for each value of
pooledactor_, then it makes sense to assume it will have to fetch
about 150 rows to find the 15 awaited ones...

In the end, if PG doesn't know about data distribution, its
behavior makes total sense to me: 150 rows of width=128 bytes
need only 3 disk pages, so it shouldn't be faster than with a
seqscan, theoretically; however, I am not sure then why on my
simple "foo" test it isn't using the same decision..


Btw, that should not solve your problem, but normally, to help PG
choose indexscan often enough, it's good to reduce
random_page_cost which is 4 by default (a high value for nowadays
servers), increase effective_cache_size to what's available on
your machine, and potentially the shared_buffers which normally
helps for a good deal of matters, performance-wise.
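A sketch of those knobs as a session-level experiment (the values below are illustrative assumptions, not recommendations; shared_buffers cannot be changed per-session and must be edited in postgresql.conf followed by a restart):

```sql
-- Try the planner settings mentioned above for this session only,
-- then re-run the problem query to see whether the plan changes.
SET random_page_cost = 2.0;        -- down from the default of 4
SET effective_cache_size = '1GB';  -- roughly what the OS cache can hold

EXPLAIN ANALYZE
SELECT * FROM jbpm_taskinstance
WHERE actorid_ IS NULL AND pooledactor_ = '21' LIMIT 15;
```

If the index scan wins with these settings, the same values can be made permanent in postgresql.conf.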

--
Guillaume Cottenceau, MNC Mobile News Channel SA, an Alcatel-Lucent Company
Av. de la Gare 10, 1003 Lausanne, Switzerland - direct +41 21 317 50 36

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [HACKERS] Out-of-tree compilation seems broken in HEAD (plpgsql)

Marko Kreen wrote:
> $ mkdir build
> $ cd build
> $ ../PostgreSQL.dev/configure
> $ make
> [...]
> gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith
> -Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing
> -fwrapv -fpic -I/home/marko/src/build/../PostgreSQL.dev/src/pl/plpgsql/src
> -I../../../../src/include
> -I/home/marko/src/build/../PostgreSQL.dev/src/include -D_GNU_SOURCE
> -c -o pl_comp.o
> /home/marko/src/build/../PostgreSQL.dev/src/pl/plpgsql/src/pl_comp.c
> /home/marko/src/build/../PostgreSQL.dev/src/pl/plpgsql/src/pl_comp.c:20:21:
> error: pl_gram.h: No such file or directory

Try running "make maintainer-clean" -- see

http://archives.postgresql.org/message-id/20080829162252.GG3983%40alvh.no-ip.org

--
Alvaro Herrera http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] [PATCH] Cleanup of GUC units code

On 9/2/08, Peter Eisentraut <peter_e@gmx.net> wrote:
> Gregory Stark wrote:
> > Peter Eisentraut <peter_e@gmx.net> writes:
> > > Marko Kreen wrote:
> > > > In the meantime, here is a simple patch for case-insensitivity.
> > > >
> > > You might be able to talk me into accepting various unambiguous, common
> > > alternative spellings of various units. But for instance allowing MB
> and Mb to
> > > mean the same thing is insane.
> > >
> >
> > Because you think some user will be trying to specify their shared_buffers
> in
> > bits?
> >
>
> My concern is that this information does not stay in the configuration
> files. It will invariably leak out into whitepapers, presentations, product
> documentation, and before long there will be confusion about why you can't
> stuff N Mb over an N Mb connection. I am not making this up.

Uh. So you want to force "proper" units in presentations at the price
of everyday admin operations? That does not seem like a sensible tradeoff.

--
marko

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] [PATCH] Cleanup of GUC units code

Gregory Stark wrote:
> Peter Eisentraut <peter_e@gmx.net> writes:
>
>> Marko Kreen wrote:
>>> In the meantime, here is a simple patch for case-insensitivity.
>> You might be able to talk me into accepting various unambiguous, common
>> alternative spellings of various units. But for instance allowing MB and Mb to
>> mean the same thing is insane.
>
> Because you think some user will be trying to specify their shared_buffers in
> bits?

My concern is that this information does not stay in the configuration
files. It will invariably leak out into whitepapers, presentations,
product documentation, and before long there will be confusion about why
you can't stuff N Mb over an N Mb connection. I am not making this up.

Mb does not add any typing ease (as "KB" might) or readability (as "sec"
might), and there is no respectable source that will claim it is an
acceptable alias for MB.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [PATCHES] WIP Join Removal

On Tue, 2008-09-02 at 12:05 +0100, Gregory Stark wrote:

> I wonder if it would be more worthwhile to remove them and have a subsequent
> phase where we look for possible joins to *add*. So even if the user writes
> "select * from invoices where customer_id=?" the planner might be able to
> discover that it can find those records quicker by scanning customer, finding
> the matching <company_id,customer_id> and then using an index to look them up
> in invoices.

This seems a less useful idea now, simply because it is such a
special case.

We would need a case where table A does not have an index on a specific
column, yet table B does have an index on that column, and where A
references B via a foreign key, with the column being a subset of the
columns of B's primary key.
That means only queries like

select ...
from a
where a.col2 = x;

can be transformed into

select ...
from a join b on (foreign key cols)
where a.col2 = x;

and then because a.col2 is a subset of foreign key columns we can infer
that b.col2 = x.

So the pre-conditions for this to be useful are:
* constraint on subset of a FK
* subset of FK is indexed on B
* subset of FK is not indexed on A

Which doesn't seem that likely to occur.
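For illustration, a hypothetical schema in which all three pre-conditions hold (the table and index names are invented):

```sql
create table b (
    col1 int,
    col2 int,
    primary key (col1, col2)
);
create index b_col2_idx on b (col2);   -- subset of the FK is indexed on B

create table a (
    col1 int,
    col2 int,
    foreign key (col1, col2) references b (col1, col2)
);
-- note: no index on a.col2, so the join-addition transformation
-- described above is the only way an index could help "a.col2 = x"
```

As the pre-condition list suggests, this combination is unusual: anyone who cared enough to index b.col2 would typically have indexed a.col2 too.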


Thanks both to Heikki and Greg for good, fast input on this patch.
Nothing more is needed now while I rework the patch.

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support


--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

[COMMITTERS] stackbuilder - wizard: wxWidgets is required

Log Message:
-----------
wxWidgets is required

Modified Files:
--------------
wizard:
CMakeLists.txt (r1.19 -> r1.20)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/stackbuilder/wizard/CMakeLists.txt.diff?r1=1.19&r2=1.20)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [HACKERS] WIP patch: Collation support

Martijn van Oosterhout wrote:
> On Tue, Sep 02, 2008 at 02:50:47PM +0300, Peter Eisentraut wrote:
>> Radek Strnad wrote:
>>> - new collations can be defined with command CREATE COLLATION <collation
>>> name> FOR <character set specification> FROM <existing collation name>
>>> [STRCOLFN <fn name>]
>>> [ <pad characteristic> ] [ <case sensitive> ] [ LCCOLLATE <lc_collate> ]
>>> [ LCCTYPE <lc_ctype> ]
>> How do you plan to make a collation case sensitive or accent sensitive?
>> I have previously commented that this is not a realistic view on how
>> collations work. Since you are apparently planning to use the system
>> locales, I don't see how you can make this work.
>
> While it's true POSIX locales don't handle this, other collation
> libraries do and we should support them if the user wants.

Do they handle exactly those two attributes specifically? Can you point
out references? Or do you mean, other collation libraries allow their
collations to be configured/customized? I think linguistically it is a
very narrow view of the world to hardcode those two attributes.

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] [PATCH] Cleanup of GUC units code

On 9/2/08, Peter Eisentraut <peter_e@gmx.net> wrote:
> Marko Kreen wrote:
> > In the meantime, here is a simple patch for case-insensitivity.
>
> You might be able to talk me into accepting various unambiguous, common
> alternative spellings of various units. But for instance allowing MB and Mb
> to mean the same thing is insane.

What would the docs for that look like? And anyway, what is wrong with
Mb for megabytes? mB may be slightly weird, but if some user likes it,
I see no reason to reject it.

You do realize that misspelling a unit name can cause downtime of several
minutes instead of a couple of seconds? We can easily do a restart in a
couple of seconds, but the restart, check logs, launch editor, find value,
change, save, restart cycle will take quite a lot longer. Why should we
increase the chance that any config edit causes problems?

Secondly, humans don't have byte-exact memory; instead they generalize
and deduce (eg. from nearby parameters). Thus remembering "KB, MB, GB"
or "kb, mb, gb" is easier than remembering "kB, MB, GB". Also, remembering
"ms, s, m, h, d" is easier than "ms, s, min, h, d". (I'm not proposing 'm'
for minutes, just noting that, since we shouldn't ever introduce a 'month'
unit for core config values, 'm' for minutes would be preferable to 'min'.)

Thirdly, please don't use the "standard units" argument, unless you plan
to propose the use of "KiB, MiB, GiB" at the same moment. The units will be
remembered as units-for-postgresql.conf. It is good if they conform to
user expectations, but it's even more important that they are easy to remember.

Fourth, for the same reason, it is preferable that the set of accepted
units stays small, again so that it is easy to remember. This
also kills any chance that "mb" could be confused with "milli-bits".

If any extension/module wants to use any other units or full-SI,
it can but that should not extend to core config values. Again,
to reduce chance for confusion.

So, are there any other arguments for keeping the current behaviour?

--
marko

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [PATCHES] WIP Join Removal

Simon Riggs wrote:
> It seems I wrote my original tests using "and" instead of "where" and
> hadn't noticed the distinction. Thanks for helping me catch that error.

Ah, yeah, that's a big difference. Proving correctness is hard, but to
refute something you need just one test case that fails ;-).

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

Re: [PATCHES] WIP Join Removal

Gregory Stark wrote:
> I wonder if it would be more worthwhile to remove them and have a subsequent
> phase where we look for possible joins to *add*. So even if the user writes
> "select * from invoices where customer_id=?" the planner might be able to
> discover that it can find those records quicker by scanning customer, finding
> the matching <company_id,customer_id> and then using an index to look them up
> in invoices.

Yeah, that would be cool. The question is whether it's worth the
additional overhead in the planner, compared to the gain in the rare cases
where it's applicable. That's always the thing with planner tricks like
this. I think we'll eventually need some sort of tuning knob to control
how hard the planner tries to apply different optimizations like that.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

Re: [HACKERS] Page layout footprint

Zdenek Kotala wrote:
> Hi Heikki,
>
> I'm sorry for lack of explanation. It is my fault.
>
> Heikki says (on commit fest wiki):
> ------------
> I believe I debunked this patch enough already. Apparently there's some
> compatibility issue between 32-bit and 64-bit Sparcs, but this patch
> didn't catch that. It doesn't seem like this provides any extra safety
> or better error messages. If I'm missing something, please provide more
> details on the scenario where we currently have a problem and how this
> helps with it.
> ------------
>
> The original proposal
> (http://archives.postgresql.org/message-id/489FC8E1.9090307@sun.com)
> contains two parts. The first part is the implementation of a
> --footprint command-line switch that shows the footprint of the page
> layout structures. It is useful for development (mostly for in-place
> upgrade) and also for manual data recovery, when you need to know the
> exact structures. The second part was to add this information to the
> pg_control file as well, but as you correctly mentioned, there is no
> real use case for that at the moment.
>
> However, the --footprint switch is still useful, which is why I put it
> on the wiki for review and feedback. The switch could also be used in
> the buildfarm to collect footprints from buildfarm members.

Ok, understood. I'll take another look from that point of view.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com


[COMMITTERS] stackbuilder - wizard: Always link with default wxWidgets libraries on Windows

Log Message:
-----------
Always link with default wxWidgets libraries on Windows

Modified Files:
--------------
wizard:
CMakeLists.txt (r1.18 -> r1.19)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/stackbuilder/wizard/CMakeLists.txt.diff?r1=1.18&r2=1.19)
wizard/cmake:
FindWX.cmake (r1.2 -> r1.3)
(http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/stackbuilder/wizard/cmake/FindWX.cmake.diff?r1=1.2&r2=1.3)

--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers

Re: [HACKERS] Window functions patch v04 for the September commit fest

Martijn van Oosterhout wrote:
> On Tue, Sep 02, 2008 at 10:44:31AM +0100, Simon Riggs wrote:
>> If we only have the combined (brain * time) to get a partial
>> implementation in for this release then I would urge we go for that,
>> rather than wait for perfection - as long as there are no other negative
>> effects.
>
> "Premature optimization is the root of all evil." (Donald Knuth)
>
> "make it work, then make it better".
>
> Getting a partial implementation that works out now is far better than
> waiting until it's perfect.

Sure. We just have to watch out that we don't follow a dead end, making
it harder to add the missing functionality later on.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com


[GENERAL] SQL equivalent to \dT

Hi all,

if I want to get a list of types (i.e., data types or enums), then I can
use the '\dT' command from within the PostgreSQL client, psql.

However, I cannot seem to figure out the SQL equivalent of the \dT
command, so that I can retrieve the list of types from SQL scripts.

For example, if I create an ENUM myself:
CREATE TYPE bird AS ENUM('duck','goose');

a quick look through the various parts of the information schema did not
reveal where this enum is stored. Is the information schema the
correct place to look for this? Which SQL statement do I need to get a
list of user-defined types?

thanks in advance,

Bram Kuijper
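psql builds \dT from the system catalogs (pg_catalog), not the information schema. A minimal sketch of an equivalent catalog query, restricted here to enum types for illustration:

```sql
-- List user-defined enum types ('e' is pg_type.typtype's code for enums;
-- drop the WHERE clause to list all types):
SELECT n.nspname AS schema, t.typname AS name
FROM pg_catalog.pg_type t
JOIN pg_catalog.pg_namespace n ON n.oid = t.typnamespace
WHERE t.typtype = 'e';

-- List the labels of one particular enum:
SELECT e.enumlabel
FROM pg_catalog.pg_enum e
JOIN pg_catalog.pg_type t ON t.oid = e.enumtypid
WHERE t.typname = 'bird';
```

To see the exact query \dT itself runs, start psql with the -E option, which echoes the catalog queries generated by backslash commands.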

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general