When a blob is being concatenated multiple times, the disk space for the database file grows massively. The only way to get the file size back to normal is a backup and restore of the database.

The issue can be reproduced as described below:

BLOBTEXT BLOB SUB_TYPE 0 SEGMENT SIZE 80)

GRANT EXECUTE ON PROCEDURE P_TEST TO SYSDBA
GRANT EXECUTE ON PROCEDURE P_TEST TO "PUBLIC"

SELECT BLOBTEXT FROM P_TEST('This is a test sentence, which is going to be concatenated.', 4000)

After the procedure has been executed, the disk space used by the database file increases to over 1 GB. When you run the procedure another time, using the SQL command EXECUTE PROCEDURE P_TEST('This is a test sentence, which is going to be concatenated.', 4000), the file size increases again. It makes no difference whether the transaction is committed or not.

I agree with Mark that this surely cannot be a feature request. Sorry, Sean: not marking this as a bug but as a feature request would need some more explanation. What feature would be requested here? "Add characters to a blob without using 1000 times the necessary space"? As I understand Adriano, this is also not "as designed", and we should see about "won't fix".

I understand from Adriano that evidently all of the blob operations are done not in memory but on disc. One cannot do that in memory because of the huge size a blob could have. So the optimum solution, imho, would be to do it on disc, like the temp files that are created when the sort buffer runs out of space. But what about the point that Mark brought up? As I take it, changing that is a rather large chunk of work, so we are going to have to live a while longer with on-disc operations. Doing the maths, it seems that the blob takes more and more space during the operation to arrive at 1 GB.
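The CREATE TABLE and procedure definitions did not fully survive in the text above; only the fragment BLOBTEXT BLOB SUB_TYPE 0 SEGMENT SIZE 80) remains. As a rough sketch only, a procedure consistent with the call P_TEST('…', 4000) and the reported behaviour might look like the following Firebird PSQL; the parameter names, variable names, and loop structure are assumptions, not the original code:

```sql
-- Hypothetical reconstruction, NOT the original procedure body.
SET TERM ^ ;

CREATE PROCEDURE P_TEST (
  TXT VARCHAR(100),   -- sentence to append (assumed parameter name)
  CNT INTEGER)        -- number of concatenations (assumed parameter name)
RETURNS (
  BLOBTEXT BLOB SUB_TYPE TEXT)
AS
DECLARE VARIABLE I INTEGER;
BEGIN
  BLOBTEXT = '';
  I = 0;
  WHILE (I < CNT) DO
  BEGIN
    -- Each || on a blob can materialize a new intermediate blob on disk,
    -- which is where the runaway file growth described above comes from.
    BLOBTEXT = BLOBTEXT || TXT;
    I = I + 1;
  END
  SUSPEND;
END^

SET TERM ; ^
```

Any loop that repeatedly appends to a blob inside a single statement or procedure should reproduce the same growth pattern.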
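"Doing the maths" can be made concrete. Assuming that each concatenation writes a fresh copy of the whole intermediate blob to disk (an assumption consistent with the on-disc behaviour described in the thread, not something verified here), the total bytes written grow quadratically with the number of iterations:

```python
# Back-of-envelope estimate for the reproduction case above.
# Assumption: every concatenation materializes a full copy of the
# intermediate blob on disk, so intermediates of size 1*L, 2*L, ..., N*L
# all get written at some point.

sentence = "This is a test sentence, which is going to be concatenated."
L = len(sentence)   # bytes appended per iteration
N = 4000            # iterations, as in the SELECT above

final_size = L * N                      # size of the finished blob
total_written = L * N * (N + 1) // 2    # sum of all intermediate sizes

print(final_size)     # a few hundred KB
print(total_written)  # hundreds of MB -- same order as the observed ~1 GB
```

The finished blob is only around a quarter of a megabyte, while the sum of all intermediates is on the order of half a gigabyte, which matches both the "over 1 GB" observation and the "1000 times the necessary space" remark to within an order of magnitude.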