On 12/7/10 10:47 AM, Leo Butler wrote:
> Update of /cvsroot/maxima/maxima/doc/info
> In directory sfp-cvsdas-4.v30.ch3.sourceforge.com:/tmp/cvs-serv22245
>
> Modified Files:
> build_index.pl
> Log Message:
> Adds POD.
> Makes get_nodes_offset call specify the two regexps, rather than relying
> on file-scoped globals.
>
Thanks for checking in this version. However, when I build the German
docs, I now get errors.
Building produces messages like:
Use of uninitialized value in print at ../build_index.pl line 473, <FH>
line 187447.
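
For what it is worth, that is the kind of warning you get when an undef
value ends up in a print list. A rough sketch of the failure mode; this
is not the actual code at line 473, and the topic data is made up:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Sketch only: a topic whose offset was never computed stays undef,
    # and printing it triggers "Use of uninitialized value in print".
    my %topic = ( name => 'additive', offset => undef, length => 42 );

    print '("', $topic{name}, '" ', $topic{offset}, ' ', $topic{length}, ")\n";
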
And the resulting maxima-index.lisp is malformed. Look at the entry for
additive: either the offset field or the length field is missing. The
offset in the entry for expand looks better, but I am still not given
the correct entry: I get the documentation for distribute_over instead.
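
If I am reading the output format correctly, the entries in
maxima-index.lisp are (name . (file offset length node)) pairs, so the
difference looks roughly like this (the file name, numbers and node
title below are invented for illustration):

    ;; what a complete entry should look like
    ("additive" . ("maxima-de.info-2" 123456 789 "Some Node Title"))
    ;; what the broken index gives me: one of the two numbers is gone
    ("additive" . ("maxima-de.info-2" 123456 "Some Node Title"))
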
Ray
> Index: build_index.pl
> ===================================================================
> RCS file: /cvsroot/maxima/maxima/doc/info/build_index.pl,v
> retrieving revision 1.7
> retrieving revision 1.8
> diff -u -d -r1.7 -r1.8
> --- build_index.pl 7 Dec 2010 14:20:26 -0000 1.7
> +++ build_index.pl 7 Dec 2010 15:47:35 -0000 1.8
> @@ -187,7 +187,7 @@
> {
> if (not $dump_pl_in) {
> get_info_files \%info_files,$main_info;
> - get_node_offsets \%info_files,$separator;
> + get_node_offsets \%info_files,$separator_re,$file_node_re;
> get_index_topics \%info_files,$file_node_item_re,$menu_item_re,$appendix_re;
> get_byte_offsets_for_topics \%info_files,$info_item_re;
> get_node_items_and_offsets \%info_files,$node_item_re,$node_title_re;
> @@ -212,6 +212,21 @@
> return $contents;
> }
>
> +=pod
> +
> +=over 5
> +
> +=item B<get_info_files( \%info_files, $main_info )>
> +
> +Slurps the main info file named by C<$main_info>, splits it at \037
> +bytes, extracts the other info names, then slurps these remaining
> +files. All contents are stored in C<%info_files>
> +
> +=back
> +
> +
> +=cut
> +
> sub get_info_files(@)
> {
> my ($info_files,$main_info)=@_;
> @@ -227,9 +242,23 @@
> return %$info_files;
> }
>
> +=pod
> +
> +=over 5
> +
> +=item B<get_node_offsets( \%info_files, $separator_re, $file_node_re )>
> +
> +Looks for occurrences of C<$separator_re>, records its position as an offset,
> +and then records occurrences of info Nodes. Morally, C<$separator_re> is
> +\037.
> +
> +=back
> +
> +=cut
> +
> sub get_node_offsets(@)
> {
> - my ($info_files,$separator)=@_;
> + my ($info_files,$separator_re,$file_node_re)=@_;
> my @filenames=@{$info_files->{'filenames'}};
> #print Dumper(@filenames);
> my ($node_name,$offset);
> @@ -249,25 +278,13 @@
>
> =pod
>
> -=over
> -
> -=item *
> -
> -get_index_topics( $info_files,$file_node_regexp,$menu_item_regexp )
> -
> -=over
> -
> -=item - $info_files is a hash reference containing the info filenames and
> -the file contents.
> -
> -=item - $file_node_regexp is a regexp that identifies each appendix item.
> -
> -=item - $menu_item_regexp is a regexp that identifies breaks in the
> -info file.
> +=over 5
>
> -=back
> +=item B<get_index_topics( \%info_files,$file_node_re,$menu_item_re,$appendix_re )>
>
> -Scans the final info file for appendix nodes.
> +Scans the final info file for appendix/topic nodes. The node name and
> +line offset are indexed by the topic name in the 'topics' hash inside
> +C<%info_files>.
>
> =back
>
> @@ -296,6 +313,21 @@
> return %$info_files;
> }
>
> +=pod
> +
> +=over 5
> +
> +=item B<get_byte_offsets_for_topics( \%info_files,$info_item_re )>
> +
> +For each topic in 'topics', we determine the byte offset and length of
> +that topic item. This requires numerous reads from the info files,
> +which we accomplish by opening their contents in the C<%info_files>
> +hash as filehandles.
> +
> +=back
> +
> +=cut
> +
> sub get_byte_offsets_for_topics(@)
> {
> my ($info_files,$info_item_re)=@_;
> @@ -355,6 +387,22 @@
> return %$info_files;
> }
>
> +=pod
> +
> +=over 5
> +
> +=item B<get_node_items_and_offsets( \%info_files,$node_item_re,$node_title_re )>
> +
> +Scans each info file for info items and computes their byte offsets
> +and lengths. Again, we open the info file contents in memory (as a
> +character string/file), and use the C<tell> function to give the byte
> +offset of each item. The results are stored in the 'items' hash in
> +C<%info_files>.
> +
> +=back
> +
> +=cut
> +
> sub get_node_items_and_offsets(@)
> {
> my ($info_files,$node_item_re,$node_title_re)=@_;
> @@ -394,6 +442,21 @@
> return %$info_files;
> }
>
> +=pod
> +
> +=over 5
> +
> +=item B<write_lisp_code( \%info_files )>
> +
> +The 'topics' hash in C<%info_files> is written out to create the
> +C<*info-deffn-defvr-pairs*> Lisp list. The 'items' hash in
> +C<%info_files> is written out to create the C<*info-section-pairs*>
> +Lisp list.
> +
> +=back
> +
> +=cut
> +
> sub write_lisp_code($)
> {
> my $info_files=shift;
>
>