| Commit message | Author | Age | Files | Lines |
|
a mempool (used for result sets).
|
as it is when compiled from source and the default for mysqlnd.
SuSE, for example, uses /var/run/mysql/mysql.sock. Also, sql.safe_mode
(ext/mysql and ingres) needs the socket.
Fix possible crashes in mysqlnd: when packets are shorter than expected,
functions should return an error.
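A minimal userland sketch of overriding the compiled-in socket path (the host, credentials, database and the SuSE-style path below are placeholder assumptions, not part of the commit):

    <?php
    // Hypothetical credentials; "localhost" forces a socket connection,
    // and the explicit path overrides the compiled-in /tmp/mysql.sock.
    $link = mysqli_connect(
        'localhost', 'user', 'password', 'testdb',
        3306,                              // port, ignored for socket connections
        '/var/run/mysql/mysql.sock'        // e.g. the SuSE location
    );
    if ($link === false) {
        die('Connect failed: ' . mysqli_connect_error());
    }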
|
needed to move to a new structure, MYSQLND_STMT. This makes
the code cleaner and less error-prone.
Also fix PDO/MySQL, which directly touches mysqlnd internals
instead of using API calls.
|
We are breaking the internal ABI in 5.3.2 anyway, so this won't hurt
and makes us prepared for the future.
|
in half by smartly introducing 2 new macros. Make MYSQLND::stats a pointer
instead of an aggregated member, and add triggers.
|
If the protocol ever gets changed, we can easily decide at runtime
which protocol to use by instantiating the right protocol
object. But this is restricted to the structure of the packets, not
the flow.
|
Things like this can be built on top of the core.
|
due to problems on Windows, which were not debugged. Better to keep
disabled code out of the core.
|
on an upper level, and by offloading it we reduce the complexity of
the core.
|
build
|
statistics.
|
functions.
|
mysqlnd: mysql_num_fields returns wrong column count for mysql_list_fields
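For context, a hedged sketch of the legacy ext/mysql calls this bug refers to (connection details, database and table name are placeholder assumptions); mysql_num_fields() on the result of mysql_list_fields() should report the table's real column count:

    <?php
    // Legacy ext/mysql API (removed in PHP 7); placeholders throughout.
    $link = mysql_connect('localhost', 'user', 'password');
    mysql_select_db('testdb', $link);

    // mysql_list_fields() returns a result describing the table's columns;
    // mysql_num_fields() on it should equal the number of columns.
    $fields = mysql_list_fields('testdb', 'some_table', $link);
    echo mysql_num_fields($fields), "\n";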
|
Zend allocator, which means it is easier to hit memory_limit if you
have big stored (buffered) result sets. Before, with libmysql, you wouldn't
hit memory_limit because libmysql uses libc's allocator and nothing is
checked. Now, with mysqlnd, the situation is stricter and it is easier to
hit memory_limit. We try to optimize for big result sets: if a result set
is larger than 10 rows, we start freeing some data to keep memory usage
constant after 10 rows. This helps in the cases where a buffered result
set is scrolled forward only, and only once; otherwise mysqlnd will need to
decode data from the network buffers again - yes, it is a trade-off between
CPU time and memory size. The best choice for big result sets is of course
unbuffered queries - for comparison: 3 million rows take at least 180MB
buffered, stay at about 3MB unbuffered, and unbuffered is
just 7-8% slower.
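To make the trade-off concrete at the userland level, a minimal sketch (connection details and the table name are placeholder assumptions) contrasting the default buffered query with an unbuffered one via MYSQLI_USE_RESULT:

    <?php
    // Placeholder connection details and table.
    $mysqli = new mysqli('localhost', 'user', 'password', 'testdb');

    // Buffered (default): the whole result set is stored client-side,
    // so a huge table counts fully against memory_limit.
    $buffered = $mysqli->query('SELECT * FROM big_table');
    while ($row = $buffered->fetch_assoc()) {
        // process $row
    }
    $buffered->free();

    // Unbuffered: rows are decoded as they are fetched, keeping memory
    // usage low at the cost of keeping the connection busy until the end.
    $unbuffered = $mysqli->query('SELECT * FROM big_table', MYSQLI_USE_RESULT);
    while ($row = $unbuffered->fetch_assoc()) {
        // process $row
    }
    $unbuffered->free();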
|
callbacks. It was done to make 2 functions static, so as not to pollute the
global function space, but that had its price of 8 bytes of overhead per
allocation, which is just too much. Also make the app member 32 bit instead
of 64 bit, which should save an additional 4 bytes, for a total of 12 bytes
saved per allocation of a row buffer.
|
some reason. Double free of the data, which led to valgrind warnings.
The fix actually optimizes the code in these cases because the old code
used copy_ctor, while the new one skips it because it is not needed.
Transferring data ownership and nulling works best for PS, where we
always copy the string from the result set, unlike the text protocol.
|
function was called which, however, doesn't respect the fact that during store
the raw data is not unpacked, to be lazy. The data is unpacked to zvals later,
during every row fetch. However, this way max_length won't be calculated
correctly. So, if a mysqlnd_fetch_field(_direct) call comes, we need to
unpack everything and then calculate max_length... and that is expensive and
defies our lazy unpacking optimisation.
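For reference, the userland call whose max_length member triggers this path (connection details and the table are placeholder assumptions); it is only meaningful on a buffered (stored) result set:

    <?php
    // Placeholders for connection and table.
    $mysqli = new mysqli('localhost', 'user', 'password', 'testdb');
    $result = $mysqli->query('SELECT name FROM some_table');  // buffered by default

    // fetch_field_direct() exposes the metadata of column 0; filling in
    // max_length requires looking at every stored row.
    $meta = $result->fetch_field_direct(0);
    echo $meta->max_length, "\n";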
|
- UG(unicode) checks
- Changed:
  - ZEND_STR_TYPE -> IS_UNICODE
  - convert_to_text -> convert_to_unicode
|
as well as uint->unsigned int
|
- fixes to sprintf modifiers, cleaning up warnings
- use _t types, like uint64_t instead of uint64, thus skipping a series of
  typedefs.
|
allocated compared to before. Also grow (realloc) the rset by 10% instead
of 33% - more reallocs but better memory usage. Of course, later there is a
realloc to shrink the rset and free it from unused rows, but that is better
than eating too much at once.
|
Add mysqli_stmt_more_result()/mysqli_stmt_next_result(), but only in
mysqlnd builds as libmysql doesn't support this feature.
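A minimal usage sketch, assuming the functions are exposed as mysqli_stmt_more_results()/mysqli_stmt_next_result() (their names in released PHP) and that multi_results() is a hypothetical stored procedure returning several result sets:

    <?php
    // Placeholder connection details and procedure.
    $mysqli = new mysqli('localhost', 'user', 'password', 'testdb');
    $stmt   = $mysqli->prepare('CALL multi_results()');
    $stmt->execute();

    do {
        // mysqli_stmt::get_result() is likewise mysqlnd-only.
        if ($result = $stmt->get_result()) {
            while ($row = $result->fetch_assoc()) {
                // process $row
            }
            $result->free();
        }
    } while ($stmt->more_results() && $stmt->next_result());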
|
We need to clone them if there will be a transformation (convert_to_xxx)
which will change the original.
- Make mysqlnd more compatible with libmysql: in this case, if the execute of
  a statement fails, set the state of the statement back to PREPARED.
- Add a test case to check the case of a failing statement.
|
Some config.w32 fixes.
Moved mysqlnd's block allocator to a separate file; it is now also
part of the connection, so no MT problems.
|
Clearly separated the fetching (physical reading) phase from the decoding (data
interpretation) phase. Threaded fetching was added but is disabled, as it needs
more work for Windows. For Linux, some touches are needed to add pthreads if
this is enabled, probably behind a compile-time switch.
The code reorganisation also makes it easy to add an async API, similar to
cURL's.
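As a rough illustration of what such an async API looks like from userland (this is the mysqli_poll()/MYSQLI_ASYNC interface that mysqlnd later exposed; connection details are placeholder assumptions):

    <?php
    // Placeholder connection details; MYSQLI_ASYNC and mysqli_poll() need mysqlnd.
    $mysqli = new mysqli('localhost', 'user', 'password', 'testdb');
    $mysqli->query('SELECT SLEEP(1), 42', MYSQLI_ASYNC);

    $links = $errors = $rejected = array($mysqli);
    // Wait up to 5 seconds for the asynchronous query to become readable.
    if (mysqli_poll($links, $errors, $rejected, 5) > 0) {
        $result = $mysqli->reap_async_query();
        print_r($result->fetch_row());
        $result->free();
    }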
|