[PD] pix_openni crash Pd

Jack jack at rybn.org
Fri Feb 24 19:15:50 CET 2012


On 24/02/2012 18:41, Mathieu Bouchard wrote:
> On 2012-02-24 at 18:10:00, Jack wrote:
>
>> Here is the output from valgrind when I create the gemwin:
>
> Are there any « Invalid write » messages before getting there?
>
> Also note that GEM 93 and GEM 92 are quite binary-incompatible, 
> therefore an external has to be compiled with the right set of .h files.
>
> There were also at least two more intermediate steps for those who 
> used SVN versions of GEM 93. For example, GridFlow supports GEM 92 and 
> two early kinds of GEM 93 but doesn't work with the final GEM 93.
>
> I'm talking about this because :
>
>> Address 0x735ef68 is 0 bytes after a block of size 32 alloc'd
>> at 0x4025315: calloc (vg_replace_malloc.c:467)
>> by 0x80B8710: getbytes (in /usr/bin/pd)
>
> Looks like an object has an unexpected size, which hints at possible 
> mismatching struct{} definitions.
>
>  ______________________________________________________________________
> | Mathieu BOUCHARD ----- téléphone : +1.514.383.3801 ----- Montréal, QC


Here is the output of valgrind from before I create the gemwin; there 
seems to be no trace of 'Invalid write':

==2417== HEAP SUMMARY:
==2417==     in use at exit: 10,269,451 bytes in 14,507 blocks
==2417==   total heap usage: 44,542 allocs, 30,035 frees, 46,723,012 bytes allocated
==2417==
==2417== LEAK SUMMARY:
==2417==    definitely lost: 14,663 bytes in 50 blocks
==2417==    indirectly lost: 8,291 bytes in 488 blocks
==2417==      possibly lost: 1,525 bytes in 56 blocks
==2417==    still reachable: 10,244,972 bytes in 13,913 blocks
==2417==         suppressed: 0 bytes in 0 blocks
==2417== Rerun with --leak-check=full to see details of leaked memory
==2417==
==2417== For counts of detected and suppressed errors, rerun with: -v
==2417== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 482 from 11)
==2418==
==2418== HEAP SUMMARY:
==2418==     in use at exit: 10,269,451 bytes in 14,507 blocks
==2418==   total heap usage: 44,542 allocs, 30,035 frees, 46,723,012 bytes allocated
==2418==
==2418== LEAK SUMMARY:
==2418==    definitely lost: 14,663 bytes in 50 blocks
==2418==    indirectly lost: 8,291 bytes in 488 blocks
==2418==      possibly lost: 1,525 bytes in 56 blocks
==2418==    still reachable: 10,244,972 bytes in 13,913 blocks
==2418==         suppressed: 0 bytes in 0 blocks
==2418== Rerun with --leak-check=full to see details of leaked memory
==2418==
==2418== For counts of detected and suppressed errors, rerun with: -v
==2418== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 482 from 11)
==2419==
==2419== HEAP SUMMARY:
==2419==     in use at exit: 10,269,451 bytes in 14,507 blocks
==2419==   total heap usage: 44,542 allocs, 30,035 frees, 46,723,012 bytes allocated
==2419==
==2419== LEAK SUMMARY:
==2419==    definitely lost: 14,663 bytes in 50 blocks
==2419==    indirectly lost: 8,291 bytes in 488 blocks
==2419==      possibly lost: 1,525 bytes in 56 blocks
==2419==    still reachable: 10,244,972 bytes in 13,913 blocks
==2419==         suppressed: 0 bytes in 0 blocks
==2419== Rerun with --leak-check=full to see details of leaked memory
==2419==
==2419== For counts of detected and suppressed errors, rerun with: -v
==2419== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 482 from 11)
==2420==
==2420== HEAP SUMMARY:
==2420==     in use at exit: 10,269,451 bytes in 14,507 blocks
==2420==   total heap usage: 44,542 allocs, 30,035 frees, 46,723,012 bytes allocated
==2420==
==2420== LEAK SUMMARY:
==2420==    definitely lost: 14,663 bytes in 50 blocks
==2420==    indirectly lost: 8,291 bytes in 488 blocks
==2420==      possibly lost: 1,525 bytes in 56 blocks
==2420==    still reachable: 10,244,972 bytes in 13,913 blocks
==2420==         suppressed: 0 bytes in 0 blocks
==2420== Rerun with --leak-check=full to see details of leaked memory
==2420==
==2420== For counts of detected and suppressed errors, rerun with: -v
==2420== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 482 from 11)
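
If I understand the struct-mismatch idea correctly, here is a minimal, 
self-contained sketch of what could produce the "0 bytes after a block 
of size 32" write quoted above. The names and layouts are hypothetical 
(this is not the actual GEM or pix_openni code), it only reproduces the 
same pattern:

/* Sketch of a struct-size mismatch between code built against two
 * different header revisions.  Hypothetical layout, chosen so the old
 * struct is 32 bytes on common ABIs, matching the valgrind report. */
#include <stdlib.h>

/* Layout the external was built against (old headers): 32 bytes. */
typedef struct _obj_old {
    char  header[4];   /* stand-in for the object header */
    float field[7];    /* old payload: 4 + 28 = 32 bytes */
} t_obj_old;

/* Layout the host library was built against (new headers): it grew. */
typedef struct _obj_new {
    char  header[4];
    float field[7];
    float added;       /* field added in a later header revision */
} t_obj_new;

int main(void)
{
    /* One side sizes the allocation from the OLD definition
     * (calloc is what Pd's getbytes calls underneath)... */
    t_obj_old *obj = calloc(1, sizeof(t_obj_old));   /* block of size 32 */
    if (!obj) return 1;

    /* ...the other side, built against the NEW definition, writes the
     * added field, which sits exactly at the end of the 32-byte block.
     * valgrind flags this as an invalid write 0 bytes after the block. */
    ((t_obj_new *)obj)->added = 1.0f;

    free(obj);
    return 0;
}

That would also explain why the external has to be recompiled against 
the matching GEM headers, as you said.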

Thanx for your help.
++

Jack




