# vim: ts=8 sw=2 sts=0 noexpandtab:
# $Id: HACKING 687 2009-03-03 21:34:18Z tim.bunce $

HACKING Devel::NYTProf
======================

We encourage hacking Devel::NYTProf!

OBTAINING THE CURRENT RELEASE
-----------------------------
The current official release can be obtained from CPAN
http://search.cpan.org/dist/Devel-NYTProf/

OBTAINING THE LATEST DEVELOPMENT CODE
-------------------------------------
You can grab the head of the latest trunk code from the Google Code repository, see
http://code.google.com/p/perl-devel-nytprof/source/checkout

CONTRIBUTING
------------
Please work with the latest code from the repository - see above.

Small patches can be uploaded via the issue tracker at
http://code.google.com/p/perl-devel-nytprof/issues/list

For larger changes please talk to us first via the mailing list at
http://groups.google.com/group/develnytprof-dev

When developing, please ensure that your changes don't introduce any new
compiler warnings.

TESTING
-------
You MUST write test cases for your changes. All tests that are dropped into the
"t" folder will be executed. (Remember to add them to MANIFEST.)  The testing system is
customized for this module because profilers are not that easy to test.
The system still uses Test::Harness and Test::More, so it should behave just
like any other perl module's 'make test'.

Writing tests is easy!

1) Design a perl script that will trigger the new behavior/feature that you
   want to test. Name the file 't/test##-description.p'.
   (There's an example script after this list.)

2) Create an empty 'reference' file for the test.
   Name the file 't/test##-description.rdt'
   When the test is run you'll get an error and a diff and you'll
   find a t/test##-description.rdt.new file waiting for you.
   If, and only if, the contents of that file are correct, then rename
   it to t/test##-description.rdt and you're done!
   Of course working out if the contents are correct can be
   non-trivial, but at least you don't have to write the file :)

3) Create a corresponding CSV output file if appropriate.
   You can use the same trick of creating an empty file, but this
   time with a .x suffix: t/test##-description.x
   You still need to verify the .x.new file of course!

4) Create a test script like this:

     use strict;
     use Test::More;
     use lib qw(t/lib);
     use NYTProfTest;

     run_test_group;

   You can add additional tests as parameters to run_test_group:

     run_test_group(2 => sub {
         my ($profile, $env) = @_;
         is $profile->foo, 'bar', "...";
         is $profile->baz, 'bax', "...";
     });
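
As an example for step 1, here's what a minimal .p script might look like
(the file name and subs are invented; the key point is that it's small and
deterministic, so the reference files stay stable across runs):

  # t/test99-description.p
  sub outer { inner() for 1..3 }    # exercise some nested sub calls
  sub inner { my $x = 0; $x += $_ for 1..10; return $x }
  outer();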


Note:  While writing a test, it is helpful to be able to run it directly,
without the test harness.  That lets you see the full stdout and stderr.
Fortunately, it's easy to do:

  perl -Mblib -MDevel::NYTProf t/test01.p

The output will be in the ./nytprof.out file.  You can then also generate the
CSV report manually:

  perl -Mblib bin/nytprofcsv

The final file will be in ./nytprof/test01.p.csv

Remember, testing is VERY VERY important!  Within a day or two of releasing
code, the CPAN testers will test the release on pretty much every major platform
you can think of.  A failed test report is much easier to fix than a runtime
error like "bash: segmentation fault: core dumped"

GENERATING DISTRIBUTIONS
------------------------
Releases are generated with 'make metafile', and then fed through tar+gz.
You shouldn't ever check in the distribution directory, any temporary files
(including Makefile.old), or change the $VERSION numbers. We'll do that for you.

RESOURCES
---------
Google Code:
http://code.google.com/p/perl-devel-nytprof/

Google Devel Group (must subscribe here):
http://groups.google.com/group/develnytprof-dev

NYTimes Open Code Blog:
http://open.nytimes.com/

TODO (unsorted, unprioritized, unconsidered, even unreasonable and daft :)
----

*** For build/test

Add (very) basic nytprofhtml test (ie it runs and produces output), which
would also let us check that the VERSION has been updated.

Add tests for evals in regex: s/.../ ...perl code... /e

Add tests for -block and -sub csv reports.

Add tests with various kinds of blocks and loops (if, do, while, until, etc).

Add mechanism to specify options inside the .p file, such as
  # NYTPROF=...
though this may not be needed if t/20.runtests.t gets dropped
and the logic moved to a library for traditional t/*.t files to use.

Add mechanism to specify inside the .p file that NYTProf
should not be loaded via the command line. That's needed to test
behavior in environments where perl is initialized first, such as mod_perl.
Then we can test things like not having the sub line range for some subs.

*** For core only

Store raw NYTPROF option string in the data file. 
Include parsed version in report index page.

Add actual size and mtime of fid to data file. (Already in data file as zero,
just needs the stat() call.) Don't alter errno.

Generalize the concept of clocks. Have a structure defining a 'clock' with
pointers to functions to get the time, subtract times to get ticks, return
the resolution etc. Give them names and attributes (cpu, realtime etc).
User could then pick a clock by name. By default we'd pick the best available
realtime clock (or best available cputime clock if usecputime=1 option set).

Add help option which would print a summary of the options and exit.
Could also print list of available clocks for the clock=N option
(using a set of #ifdef's)

Slow builtins, eg those that make system calls or are otherwise expensive, like
crypt, could be treated as calls to xsubs in the CORE:: namespace.
Or perhaps more usefully as xsubs in the current package.

Replace DB::enable_profiling() and DB::disable_profiling() with $DB::profile = 1|0;
That's a more consistent API with $DB::single etc., but more importantly it lets
users leave the code in place when NYTProf is not loaded. It'll just do nothing,
whereas currently the user will get a fatal error if NYTProf isn't loaded.
It also allows smart things like use of local() for temporary overrides.
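
A sketch of the intended usage, assuming $DB::profile were implemented
(run_uninteresting_code() is just a placeholder):

  {
      local $DB::profile = 0;   # suspend profiling for this scope
      run_uninteresting_code();
  }                             # previous value restored automatically
  # If NYTProf isn't loaded, the assignment is harmless.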

Combine current profile_* globals into a single global int using bit fields.
That way assigning to $DB::profile can offer a finer degree of control.
Specifically to enable/disable the sub or statement profiler separately.

Add mechanism to enable control of profiling on a per-sub-name and/or
per-package-name basis. For example, specify a regex and whenever a sub is
entered (for the first time, to make it cheap) check if the sub name matches
the regex. If it does then save the current $DB::profile value and set a new one.
When the sub exits restore the previous $DB::profile value.
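
A rough Perl sketch of that logic (the real hook would live in C, and the
option names here are invented):

  my %opt = (subs_rx => qr/^Slow::/, subs_profile => 0);  # invented options
  my (%matched, @saved);
  sub on_sub_entry {
      my ($subname) = @_;
      # pay the regex cost only on first entry to each sub
      $matched{$subname} = ($subname =~ $opt{subs_rx} ? 1 : 0)
          unless exists $matched{$subname};
      push @saved, $DB::profile;                          # save current value
      $DB::profile = $opt{subs_profile} if $matched{$subname};
  }
  sub on_sub_exit {
      $DB::profile = pop @saved;                          # restore on return
  }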

Could optionally track resource usage per sub. Data sources could be perl sv
arenas (clone visit() function from sv.c) to measure number of SVs & total SV
memory, plus getrusage(). Abstract those into a structure with functions to
subtract the difference. Then use the same logic to get inclusive and exclusive
values as we use for inclusive and exclusive subroutine times.
Also possibly track the memory allocated to lexical pad SVs
(for given sub at given depth).

Work around OP_UNSTACK bug (http://rt.perl.org/rt3/Ticket/Display.html?id=60954)
  while ( foo() ) {  # all calls to foo should be from here
      ...
      ... # no calls to foo() should appear here
  }

*** For core and reports

Add NYTP_SIi_* constants for ::SubInfo array.

Add @INC to data file so reports can be made more readable by removing
(possibly very long) library paths where appropriate.
Tricky thing is that @INC can change during the life of the program.
One approach might be to output it whenever we assign a new fid
but only if different to the last @INC that was output.
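
A hedged sketch of that approach (the record-writing function is hypothetical):

  my $last_inc = '';
  sub maybe_output_inc {            # call whenever a new fid is assigned
      my $inc = join "\0", @INC;
      return if $inc eq $last_inc;  # only emit when @INC has changed
      write_inc_record(@INC);       # hypothetical data-file writer
      $last_inc = $inc;
  }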

Add marker with timestamp for phases BEGIN, CHECK, INIT, END
(could combine with pid marker).
Add marker with timestamp for enable_profile and disable_profile.
The goals here are to
a) know how long the different phases of execution took (mostly for general
interest), and
b) know how much time was spent with the profiler enabled, to calculate accurate
percentages and also be able to spot 'leaks' in the data processing (e.g. if
the sum of the statement times doesn't match the time spent with the profiler
enabled, due to nested string evals for example).

Could save 'current subname' in sub profiler so we can say A was called by B
and not just A was called by line X of file Y. (Will need to SAVE* a link to
previous current subname and restore it on return from sub.)
This would free us from the perils of trying to guess the calling sub from the
line numbers (which is risky normally but is pure FAIL for Moose/Class::MOP).

*** For reports only

::Reader and its data structures need to be refactored to death.
The whole reporting framework needs a rewrite to use a single 'thin' command
line and classes for the Model (lines, files, subs), View (html, csv etc),
and Controller (composing views to form reports).
Dependent on a richer data model.

Then rework bin/nytprof* to use the new subclasses.
Ideally we'd end up with a single nytprof command that just sets up the
appropriate classes to do the work.

Add way to merge profile data. Merging could be done in perl.

Trim leading @INC portion from filename in __ANON__[/very/long/path/...]
in report output. (Keep full path in link/tooltip/title as it may be ambiguous when shortened).
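
One hedged way to do the trimming (sketch only):

  sub strip_inc_prefix {
      my ($path) = @_;
      # try longest @INC entries first so /a/b/c wins over /a/b
      for my $inc (sort { length $b <=> length $a } @INC) {
          return $1 if $path =~ m{^\Q$inc\E/(.+)\z};
      }
      return $path;   # no @INC entry matched; leave unchanged
  }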

Add help link in reports. Could go to docs page on search.cpan.org.

Add % of total time to the file table on the index page.
To do this we need an accurate total time - based on the sum of times between
enable_profile() and disable_profile().

Add a 'permalink' icon (eg infinity symbol) to the right of lines that define
subs to make it easier to email/IM links to particular places in the code.

Report could track which subs it has reported caller info for
and so be able to identify subs that were called but haven't been included
in the report because we didn't know where the sub was.
They could then be included in a separate 'miscellaneous' page.
This is a more general way to view the problem of xsubs in packages
for which we don't have any perl source code.

*** Other, less important, random, unsorted, and possibly daft ideas

Intercept all opcodes that may fork and run perl code in the child
  ie fork, open, entersub (ie xs), others?
  and fflush before executing the op (so fpurge isn't strictly required)
  and reinit_if_forked() afterwards
  add option to force reinit_if_forked check per stmt just-in-case
Alternatively it might be better to use pthread_atfork() [if available] with a
child handler. The man page says "Remember: only async-cancel-safe functions
are allowed on the child side of fork()" so it seems that the safe thing to do
is to use a volatile flag variable, and change its value in the handler to
signal to the main code.

Support profiling programs which use threads:
  - move all relevant globals into a structure
  - add lock around output to file

Set options via import so perl -d:NYTProf=... works. Very handy. May need
alternative option syntax. Also perl gives special meaning to 't' option
(threads) so we should reserve the same for eventual thread support.
Problem with this is that the import() call happens after init_profiler(),
which limits the usefulness. So we'd need to restrict it to certain options
(trace would certainly be useful).
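
A sketch of what the import hook might look like (the option-applying
function is hypothetical):

  package Devel::NYTProf;
  sub import {
      my ($class, @args) = @_;
      # perl -d:NYTProf=trace=1,file=x arrives here as ('trace=1','file=x')
      for my $arg (@args) {
          my ($name, $value) = split /=/, $arg, 2;
          apply_late_option($name, $value);   # hypothetical; only options that
      }                                       # are safe after init_profiler()
  }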

Add resolution of __ANON__ sub names (eg imported 'constants') where possible.

Currently only the line of the last BEGIN (or 'use') in the file is recorded.
Rename Foo::BEGIN subs to Foo::BEGIN[file:line]
(which matches the style used for Foo::__AUTO__[file:line])
Probably need to record or output the line range when the BEGIN 'sub' is entered.
Same for END subs.

Record $AUTOLOAD when AUTOLOAD() called. Perhaps as ...::AUTOLOAD[$AUTOLOAD]
Or perhaps just use the original name if the 'resolved' one is AUTOLOAD.
Could be argued either way.

More generally, consider the problem of code where one code path is fast 
and just sets $sql = ... (for example) and another code path executes the
sql. Some $sql may be fast and others slow. The profile can't separate the
timings based on what was in $sql because the code path was the same in both
cases. (For sql DBI::Profile can be used, but the underlying issue is general.)
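
To illustrate with a contrived DBI-style example (the variables are made up):

  # Both branches execute the same profiled lines, so the profiler can't
  # tell fast queries from slow ones by code path alone.
  my $sql = $want_summary ? $cheap_query : $expensive_query;
  $dbh->do($sql);   # all execution time accrues to this one line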

Refactor this HACKING file!

The data file includes the information mapping each line-level line to the
corresponding block-level and sub-level lines. This should be added to the data
structure. It would enable a much richer visualization of which lines have
contributed to the 'rolled up' counts. That's especially tricky to work out
with the block level view.

Following on from that, I have a totally crazy idea that the browser's CSS engine
could be used to highlight the corresponding rollup line when hovering over a
source line, and/or the opposite. Needs lots of thought, but it's an interesting idea.

Investigate and fix "Unable to determine line number" cases. Here's one:

  $ NYTPROF=begin=1:blocks=1:trace=1 perl  -d:NYTProf -Mstrict -e 1
  ...
  New fid  1 (after  0:1   ): -e /Users/timbo/perl/mods/nytprof-trunk/-e
  New fid  2 (after  1:3   ): /usr/local/perl58-i/lib/5.8.6/strict.pm 
  at 3: EVAL in different file (-e, /usr/local/perl58-i/lib/5.8.6/strict.pm) at /usr/local/perl58-i/lib/5.8.6/strict.pm line 3.
  at 5: EVAL in different file (-e, /usr/local/perl58-i/lib/5.8.6/strict.pm) at /usr/local/perl58-i/lib/5.8.6/strict.pm line 5.
  at 25: EVAL in different file (-e, /usr/local/perl58-i/lib/5.8.6/strict.pm) at /usr/local/perl58-i/lib/5.8.6/strict.pm line 25.
  at 37: EVAL in different file (-e, /usr/local/perl58-i/lib/5.8.6/strict.pm) at /usr/local/perl58-i/lib/5.8.6/strict.pm line 37.
  Unable to determine line number in -e.
  Unable to determine line number in -e.

Change from tracing via warn() to using our own function that, at least initially,
calls warn() while temporarily disabling the __WARN__ hook.
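
For example, a minimal version might be:

  sub _trace {
      local $SIG{__WARN__};   # drop any __WARN__ hook for this scope
      warn @_;                # so trace output goes straight to stderr
  }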

Profile and optimize report generation

The sub_caller information is currently one level deep. It would be good to
make it two levels. Especially because it would allow you to "see through"
AUTOLOADs and other kinds of 'dispatch' subs.

Currently goto isn't explicitly noticed by the sub profiler. Need to intercept pp_goto.
But that may be non-trivial. Could make it look like the statement that called
the sub that called goto also called the sub that goto went to, or make it look
like the goto &$sub made the call (but we'd then get the wrong inclusive time,
probably).

Bug or limitation?: sub calls in a continue { ... } block of a while () get
associated with the 'next;' within the loop. Fixed by perl change 33710?
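
For example (sub names made up):

  while (get_next()) {
      next;              # calls made in the continue block below get
  }                      # attributed to this 'next;' line...
  continue {
      tally();           # ...rather than to this line
  }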

Investigate style.css problem when using --outfile=some/other/dir

Add option to set processor affinity.

Index should show eval fids in some form - collapsed per location?
Or just included in the stats for the outer source file.

Sub profiler should avoid sv_setpvf(subname_sv, "%s::%s", stash_name, GvNAME(gv));
because it's expensive (Perl_sv_setpvf_nocontext accounts for 29% of pp_entersub_profiler).
Use a two level hash: HvNAME(GvSTASH(gv)) then GvNAME(gv).
Should then also be able to avoid newSV/free for subname_sv (which accounts for 50% of its time).

Class::MOP should update %DB::sub (if $^P & 0x10 is set) when it creates methods.
Sub::Name should do the same (extracting the file and line from the ANON[...:...]).
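
That is, roughly (key and value format per perldebguts; the variables here
are placeholders):

  if ($^P & 0x10) {
      # record where the generated sub lives so debuggers/profilers can find it
      $DB::sub{"${package}::${name}"} = "$file:$start_line-$end_line";
  }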

Add refs so a string eval fid can be related to its 'siblings' (other string
eval fids from the same line in the 'parent' fid).

Profile should report _both_ the 'raw original' filename (possibly relative)
used by the application being profiled, plus an absolute filename determined
ASAP (to avoid problems with scripts that chdir).


Called-by list: in "by $subname line $line of $file", the file should not
include the @INC portion.

Add caller sub name (via $profile->subname_at_file_line($fid, $line)) to sub
caller info so report_src_line() doesn't have to do the expensive lookup.
It's also a useful step on the road to the profiler storing the calling sub's
name when generating the profile.

Monitor and report when method cache is invalidated. Watch generation number
and output a tag when it changes. Report locations of changes. Highlight those
that happen after INIT phase.