| Commit message | Author | Age | Files | Lines |
| |
in the TupleDesc that the caller already has (for call from ExecMain) or
can make just as easily as ExecInitJunkFilter() can (for call from
ExecAppend). Also, don't bother to build a junk filter for an INSERT
operation that doesn't actually need one, which is the normal case.
| |
The following patch extends the COMMENT ON functionality to the
rest of the database objects beyond just tables, columns, and views. The
grammar of the COMMENT ON statement now looks like:

COMMENT ON [
    [ DATABASE | INDEX | RULE | SEQUENCE | TABLE | TYPE | VIEW ] <objname> |
    COLUMN <relation>.<attribute> |
    AGGREGATE <aggname> <aggtype> |
    FUNCTION <funcname> (arg1, arg2, ...) |
    OPERATOR <op> (leftoperand_typ rightoperand_typ) |
    TRIGGER <triggername> ON <relname>
]

Mike Mascari
(mascarim@yahoo.com)
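For illustration, here are a few statements the extended grammar would accept
(the object names are illustrative, not taken from the patch):
    COMMENT ON DATABASE template1 IS 'the default template database';
    COMMENT ON SEQUENCE order_id_seq IS 'generates order numbers';
    COMMENT ON TRIGGER audit_trig ON orders IS 'row auditing trigger';
    COMMENT ON AGGREGATE complex_sum complex IS 'sum of complex numbers';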
| |
eliminating some wildly inconsistent coding in various parts of the
system. I set MAXPGPATH = 1024 in config.h.in. If anyone is really
convinced that there ought to be a configure-time test to set the
value, go right ahead ... but I think it's a waste of time.
| |
when an initdb-forcing change has been applied within a development cycle.
PG_VERSION serves this purpose for official releases, but we can't bump
the PG_VERSION number every time we make a change to the catalogs during
development. Instead, increase the catalog version number to warn other
developers that you've made an incompatible change. See my mail to
pghackers for more info.
| |
might think ... in fact doesn't do much of anything at the moment ...
| |
pg_dump and interfaces/odbc still need some work.)
| |
a generalized module 'tuplesort.c' that can sort either HeapTuples or
IndexTuples, and is not tied to execution of a Sort node. Clean up
memory leakages in sorting, and replace nbtsort.c's private implementation
of mergesorting with calls to tuplesort.c.
| |
recycle storage within sort temp file on a block-by-block basis. This
reduces peak disk usage to essentially just the volume of data being
sorted, whereas it had been about 4x the data volume before.
| |
From the ORACLE 7 SQL Language Reference Manual:
-----------------------------------------------------
COMMENT
Purpose:
To add a comment about a table, view, snapshot, or
column into the data dictionary.
Prerequisites:
The table, view, or snapshot must be in your own schema,
or you must have COMMENT ANY TABLE system privilege.
Syntax:
COMMENT ON [ TABLE table ] |
[ COLUMN table.column] IS 'text'
You can effectively drop a comment from the database
by setting it to the empty string ''.
-----------------------------------------------------
Example:
COMMENT ON TABLE workorders IS
'Maintains base records for workorder information';
COMMENT ON COLUMN workorders.hours IS
'Number of hours the engineer worked on the task';
to drop a comment:
COMMENT ON COLUMN workorders.hours IS '';
The current patch will simply perform the insert into
pg_description, as per the TODO. And, of course, when
the table is dropped, any comments relating to it
or any of its attributes are also dropped. I haven't
looked at the ODBC source yet, but I do know from
an ODBC client standpoint that the standard does
support the notion of table and column comments.
Hopefully the ODBC driver is already fetching these
values from pg_description, but if not, it should be
trivial.
Hope this makes the grade,
Mike Mascari
(mascarim@yahoo.com)
| |
BufFile so that it handles multi-segment temporary files transparently.
This allows sorts and hashes to work with data exceeding 2Gig (or whatever
the local limit on file size is). Change psort.c to use relative seeks
instead of absolute seeks for backwards scanning, so that it won't fail
when the data volume exceeds 2Gig.
| |
Cygwin snapshots (tested on 990115, which is the recommended one to use - it
fixes some errors in B20.1).
And I have another patch for including <sys/ipc.h> before <sys/sem.h> in
backend/storage/lmgr/proc.c - it is required due to the design of the cygipc
headers.
Dan
| |
Now a WHERE restriction on ctid is allowed, though it is still
implemented by a sequential scan.
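A minimal example of the kind of query this enables (the table name is made
up); the tuple is still located by scanning rather than by a direct fetch:
    SELECT ctid, * FROM mytable WHERE ctid = '(0,1)';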
| |
up debugging options for postmaster and postgres programs. postmaster
-d is no longer optional. Documentation updates.
| |
mentioned in FROM but not elsewhere in the query: such tables should be
joined over anyway. Aside from being more standards-compliant, this allows
removal of some very ugly hacks for COUNT(*) processing. Also, allow
HAVING clause without aggregate functions, since SQL does. Clean up
CREATE RULE statement-list syntax the same way Bruce just fixed the
main stmtmulti production.
CAUTION: addition of a field to RangeTblEntry nodes breaks stored rules;
you will have to initdb if you have any rules.
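Two small examples of the newly accepted forms (table and column names are
invented for illustration):
    -- a table listed in FROM but not referenced elsewhere is still joined
    -- over, so this counts each row of t1 once per row of t2:
    SELECT count(*) FROM t1, t2;
    -- HAVING no longer requires an aggregate function:
    SELECT dept FROM employees GROUP BY dept HAVING dept <> 'SALES';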
| |
First step in cleaning up backend initialization code.
Fix for FATAL: now FATAL is ERROR + exit.
| |
expressions in CREATE TABLE. There is no longer an emasculated expression
syntax for these things; it's full a_expr for constraints, and b_expr
for defaults (unfortunately the fact that NOT NULL is a part of the
column constraint syntax causes a shift/reduce conflict if you try a_expr.
Oh well --- at least parenthesized boolean expressions work now). Also,
stored expression for a column default is not pre-coerced to the column
type; we rely on transformInsertStatement to do that when the default is
actually used. This means "f1 datetime default 'now'" behaves the way
people usually expect it to.
BTW, all the support code is now there to implement ALTER TABLE ADD
CONSTRAINT and ALTER TABLE ADD COLUMN with a default value. I didn't
actually teach ALTER TABLE to call it, but it wouldn't be much work.
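A small sketch of what the relaxed syntax allows; apart from the quoted
"f1 datetime default 'now'" case, the names here are invented:
    CREATE TABLE events (
        f1   datetime DEFAULT 'now',  -- coerced when the default is used, not at CREATE time
        code int4 CHECK (code > 0 AND (code < 1000 OR code = 9999))
    );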
| |
not just C, so that ISCACHABLE attribute can be specified for user-defined
functions. Get rid of ParamString node type, which wasn't actually being
generated by gram.y anymore, even though define.c thought that was what
it was getting. Clean up minor bug in dfmgr.c (premature heap_close).
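A hedged example of attaching the attribute to a non-C function via the WITH
clause (the function itself is invented for illustration):
    CREATE FUNCTION one_plus_one() RETURNS int4
        AS 'SELECT 1 + 1'
        LANGUAGE 'sql'
        WITH (iscachable);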
| |
works if finite() is a function. Patch from Christof Petig.
| |
expression_tree_mutator rather than ad-hoc tree walking code. This shortens
the code materially and fixes a fair number of sins of omission. Also,
change modifyAggrefQual to *not* recurse into subselects, since its mission
is satisfied if it removes aggregate functions from the top level of a
WHERE clause. This cures problems with queries of the form SELECT ...
WHERE x IN (SELECT ... HAVING something-using-an-aggregate), which would
formerly get mucked up by modifyAggrefQual. The routine is still
fundamentally broken, of course, but I don't think there's any way to get
rid of it before we implement subselects in FROM ...
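For concreteness, a query of the shape described above, which modifyAggrefQual
now leaves alone (the schema is hypothetical):
    SELECT name FROM departments
    WHERE deptno IN (SELECT deptno FROM employees
                     GROUP BY deptno
                     HAVING avg(salary) > 50000);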
| |
FOREIGN KEY triggers.
Added pg_proc entries for all the new functions.
Jan
| |
Jan
| |
Jan
| |
Implements the CREATE CONSTRAINT TRIGGER and SET CONSTRAINTS commands.
TODO:
Generic builtin trigger procedures
Automatic execution of appropriate CREATE CONSTRAINT... at CREATE TABLE
Support of new trigger type in pg_dump
Swapping of huge # of events to disk
Jan
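A rough sketch of the two commands; the constraint trigger form shown here is
assumed from later PostgreSQL syntax, and all names are invented:
    CREATE CONSTRAINT TRIGGER orders_customer_check
        AFTER INSERT OR UPDATE ON orders
        DEFERRABLE INITIALLY DEFERRED
        FOR EACH ROW
        EXECUTE PROCEDURE check_order_customer();

    SET CONSTRAINTS ALL DEFERRED;
    -- ... modify data ...
    SET CONSTRAINTS ALL IMMEDIATE;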
| |
functions. One problem that I have encountered with the function
manager is that it does not allow the user to define type conversion
functions that convert between user types. For instance, if mytype1,
mytype2, and mytype3 are three Postgresql user types, and I wish to
define Postgresql conversion functions that convert from mytype1 and
from mytype2 to mytype3, I run into problems, because the Postgresql
dynamic loader would look for a single link symbol, mytype3, for both
pieces of object code. If I just change the name of one of the
Postgresql functions (to make the symbols distinct), the automatic type
conversion that Postgresql uses, for example, when matching operators
to arguments no longer finds the type conversion function.
The solution that I propose, and have implemented in the attached
patch, extends the CREATE FUNCTION syntax so that the link symbol can
be given explicitly. In the case above, I can use the link symbol
mytype2_to_mytype3 for the object code that implements the
mytype2-to-mytype3 conversion function, and name that symbol explicitly
when defining the Postgresql function (a sketch of the extended AS
clause appears below).
The patch includes changes to the parser to include the altered
syntax, changes to the ProcedureStmt node in nodes/parsenodes.h,
changes to commands/define.c to handle the extra information in the AS
clause, and changes to utils/fmgr/dfmgr.c that alter the way that the
dynamic loader figures out what link symbol to use. I store the
string for the link symbol in the prosrc text attribute of the pg_proc
table, which is currently unused in rows that reference dynamically
loaded functions.
Bernie Frankpitt
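A sketch of the extended AS clause as it appears in later PostgreSQL releases;
the exact spelling introduced by this patch may differ, and the file and type
names are invented:
    CREATE FUNCTION mytype3(mytype1) RETURNS mytype3
        AS '/usr/local/pgsql/lib/mytypes.so', 'mytype1_to_mytype3'
        LANGUAGE 'c';
    CREATE FUNCTION mytype3(mytype2) RETURNS mytype3
        AS '/usr/local/pgsql/lib/mytypes.so', 'mytype2_to_mytype3'
        LANGUAGE 'c';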
| |
It doesn't work currently, but it also doesn't break anything -:)
| |
When drawing up a very simple "text-drawing" of how the negotiation is done,
I realised I had done this last part (fallback) in a very stupid way. Patch
#4 fixes this, and does it in a much better way.
Included is also the simple text-drawing of how the negotiation is done.
//Magnus
| |
with no input rows, per pghackers discussions around 7/22/99. Clean up
a bunch of ugly coding while at it; remove redundant re-lookup of
aggregate info at start of each new GROUP. Arrange to pfree intermediate
values when they are pass-by-ref types, so that aggregates on pass-by-ref
types no longer eat memory. This takes care of a couple of TODO items...
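Per SQL92, count() returns zero over an empty input set while the other
standard aggregates return NULL; for example (table name hypothetical):
    SELECT count(*), sum(amount), max(amount)
    FROM orders
    WHERE 1 = 0;   -- no input rows: count is 0, sum and max are NULL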
| |
Frankpitt, plus some improvements from yours truly. The simplifier depends
on the proiscachable field of pg_proc to tell it whether a function is
safe to pre-evaluate --- things like nextval() are not, for example.
Update pg_proc.h to contain reasonable cacheability information; as of
6.5.* hardly any functions were marked cacheable. I may have erred too
far in the other direction; see recent mail to pghackers for more info.
This update does not force an initdb, exactly, but you won't see much
benefit from the simplifier until you do one.
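A rough illustration of the distinction, with invented names; the planner's
actual choices may differ in detail:
    -- a cachable constant expression can be folded before execution:
    SELECT * FROM t WHERE x < 2 + 2;        -- effectively WHERE x < 4
    -- nextval() is not cachable, so it must be evaluated at run time:
    SELECT nextval('order_id_seq');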
| |
* Buffer refcount cleanup (per my "progress report" to pghackers, 9/22).
* Add links to backend PROC structs to sinval's array of per-backend info,
and use these links for routines that need to check the state of all
backends (rather than the slow, complicated search of the ShmemIndex
hashtable that was used before). Add databaseOID to PROC structs.
* Use this to implement an interlock that prevents DESTROY DATABASE of
a database containing running backends. (It's a little tricky to prevent
a concurrently-starting backend from getting in there, since the new
backend is not able to lock anything at the time it tries to look up
its database in pg_database. My solution is to recheck that the DB is
OK at the end of InitPostgres. It may not be a 100% solution, but it's
a lot better than no interlock at all...)
* In ALTER TABLE RENAME, flush buffers for the relation before doing the
rename of the physical files, to ensure we don't get failures later from
mdblindwrt().
* Update TRUNCATE patch so that it actually compiles against current
sources :-(.
You should do "make clean all" after pulling these changes.
| |
additional argument specifying the kind of lock to acquire/release (or
'NoLock' to do no lock processing). Ensure that all relations are locked
with some appropriate lock level before being examined --- this ensures
that relevant shared-inval messages have been processed and should prevent
problems caused by concurrent VACUUM. Fix several bugs having to do with
mismatched increment/decrement of relation ref count and mismatched
heap_open/close (which amounts to the same thing). A bogus ref count on
a relation doesn't matter much *unless* a SI Inval message happens to
arrive at the wrong time, which is probably why we got away with this
sloppiness for so long. Repair missing grab of AccessExclusiveLock in
DROP TABLE, ALTER/RENAME TABLE, etc, as noted by Hiroshi.
Recommend 'make clean all' after pulling this update; I modified the
Relation struct layout slightly.
Will post further discussion to pghackers list shortly.
| |
conditions. There are some pretty bogus heuristics in prepqual.c that
try to decide whether to output CNF or DNF format; they need to be replaced,
likely. Right now the code is probably too willing to choose DNF form,
which might hurt performance in some cases that used to work OK.
But at least we have a foundation to build on.
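For reference, the two canonical forms of one and the same qual, with invented
column names:
    -- CNF (an AND of ORs):
    ... WHERE a = 1 AND (b = 2 OR c = 3)
    -- DNF (an OR of ANDs):
    ... WHERE (a = 1 AND b = 2) OR (a = 1 AND c = 3)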
| |
in or_normalize, remove detection of duplicate subexpressions (since it's
highly unlikely to be worth the amount of time it takes), and introduce
a dnfify() entry point so that unintelligible backwards logic in UNION
processing can be eliminated. This is just an intermediate step ---
next thing is to look at not forcing the qual into CNF form when it would
be better off in DNF form.
| |
and pg_server_to_client. Eliminate copy.c's restriction on the length
of a single attribute.
| |
This change seems necessary in conjunction with long queries, and it
cleans up some bogosity in connection with long EXPLAIN texts anyway.
Note that current libpq will accept any length error message (at least
until it runs out of memory); prior versions have a limit of 8K, but
will cleanly discard excess error text, so there shouldn't be any
big compatibility problems with old clients.
| |
transaction abort --- before it only worked if there was exactly one level
of allocation context stacked in the blank portal. Now it does the right
thing for any depth, including zero...
| |
before comparison; if fields being joined are different widths then hashing
will yield wrong answer. Also, remove hashjoinable mark from all uses of
array_eq, because array structures may have padding bytes between elements
and the pad bytes are of uncertain content. This could be revisited if
array code is cleaned up.
Modify opr_sanity regress test to complain if array_eq operator is marked
hashjoinable.
| |
offended my aesthetic sensibility that there was so much unreadable code
doing so little. Rewritten code is about half the size, faster, and
(I hope) much more intelligible.