From: Russ Cox <rsc@swtch.com>
Newsgroups: comp.compilers
Date: Wed, 17 May 2023 10:55:27 -0400
Organization: Compilers Central
Keywords: debug, tools
Posted-Date: 17 May 2023 13:13:32 EDT
Thanks to everyone for the replies and pointers.

As far as 'git bisect' is concerned, we have found it incredibly
useful too, of course. It is clearly another instance of binary
search to find bugs, but it is binary search over time, while the
system I described is binary search over the program being compiled:
"What change broke the compiler?" versus "What code does that change
miscompile?" The two are independent and complementary.
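To make the distinction concrete, here is a minimal sketch of the
program-side search (an illustration, not the actual system I
described; fails() is a hypothetical driver that compiles the given
functions with the suspect compiler and everything else with a
known-good one, links the result, and runs the failing test):

    # Binary search over the program being compiled: assuming a single
    # culprit, fails(subset) is True exactly when subset contains it.
    def isolate_culprit(items, fails):
        assert fails(items)  # failure reproduces with everything suspect
        while len(items) > 1:
            half = items[:len(items) // 2]
            # If the failure reproduces with only this half compiled
            # by the suspect compiler, the culprit is in this half.
            items = half if fails(half) else items[len(half):]
        return items[0]

Each probe halves the candidate set, so a program with n functions
needs only about log2(n) rebuilds to name the miscompiled function.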
I had originally not mentioned binary search over time because it
seemed non-compiler-specific, but as Cameron McInally pointed out,
one of its independent inventions was at Cray Research in the mid-1990s
in a compiler context [1]. I reached out to Brian Ness to find out more
of the history of that work, and I've included his response below,
with permission. The other independent invention I know of was by
Larry McVoy, also in the mid-1990s, in the context of Linux kernel
development [2]. Linux did not have any kind of whole-tree source
control at the time, but the developers posted manual snapshots
frequently enough to make the technique applicable. I also reached
out to Larry McVoy for
more of the history of that work, and I've included his response below,
also with permission. That branch of the history led directly to
'git bisect' [3].
In general, binary search for narrowing a problem being debugged was
obviously well known by the mid-1990s. At that time, whole snapshot
source history was becoming common enough that the idea of binary
search over it was inevitable. I expect there are other instances of
independent invention we don't know about.
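The time-side search is just as short. In its simplest form it
assumes a monotone history: snapshots before the bad change pass and
snapshots after it fail. A minimal sketch, where snapshot_fails() is
a hypothetical callback that unpacks a snapshot, builds it, and runs
the failing test, and the first snapshot is assumed good and the last
bad:

    # Binary search over time: return the first failing snapshot.
    def first_bad(snapshots, snapshot_fails):
        lo, hi = 0, len(snapshots) - 1  # lo passes, hi fails
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if snapshot_fails(snapshots[mid]):
                hi = mid  # failure already present at mid
            else:
                lo = mid  # still passing at mid
        return snapshots[hi]  # the change between lo and hi is the culprit

This is exactly the loop that 'git bisect' automates, with source
control supplying the snapshots.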
Best,
Russ
[1] https://ieeexplore.ieee.org/document/625082
[2] https://elixir.bootlin.com/linux/1.3.73/source/Documentation/BUG-HUNTING
[3] https://groups.google.com/g/fa.linux.kernel/c/cp6abJnEN5U/m/5Z5s14LkzR4J
---------- Forwarded message ---------
From: Russ Cox <rsc@swtch.com>
Date: Mon, May 15, 2023 at 1:18 PM
Subject: history of source control binary search for debugging
Hi Larry and Brian,
I am interested in the origin of the idea of binary search over source
control history running tests to identify the change that introduced a
given failure. I have traced it back as far as the mid-1990s with
Larry's Documentation/BUG-HUNTING file in the Linux kernel
(March 1996) and Brian's COMPSAC'97 paper with Viet Ngo "Regression
containment through source change isolation" (August 1997).
(I have been unable to find Viet Ngo's email address or I would add
him to this thread.)
Can you share any background about when you first encountered the idea,
or any pointers to earlier systems or engineers I might reach out to?
Thanks very much.
Best,
Russ
---------- Forwarded message ---------
From: Larry McVoy <lm@mcvoy.com>
Date: Mon, May 15, 2023 at 1:57 PM
Subject: Re: history of source control binary search for debugging
Good to hear from you Russ.
I'm pretty sure I "invented" that idea, which means I hadn't seen it before.
All I was doing was using the fact that binary search is log(n), and
giving non-kernel people a way to do some grunt work, get it close, and
then hand it off to the kernel people.
But note that BUG-HUNTING was using snapshots, not source management.
BitKeeper hadn't been invented yet and the other source management systems
sucked because they were reproducible only at tagged points. At least
that was true of CVS.
When BitKeeper came along, we added this:
--longest Restrict the deltas to those on the longest line
between the two range endpoints. Unlike a range, the
lower bound is included in the output.
because BitKeeper history is a lattice, not a straight line. So your
test points became
bk changes -nd:REV: > REVS
and binary search over those. That gets you the longest "straight" line
in the graph. I dunno if Git has something like that, maybe. Lots of
stuff got lost in the transition to Git, still chaps my hide to this
day.
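(Given the REVS file, the search itself is the same first-bad-revision
loop sketched earlier, with "check out the revision, rebuild, rerun
the test" as the probe. And Git does have something similar to
--longest: 'git rev-list --first-parent' walks one "straight" line
through the merge lattice, and newer versions of 'git bisect' accept
a --first-parent option.)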
---------- Forwarded message ---------
From: Brian Ness <bcn@bubblecreek.com>
Date: Mon, May 15, 2023 at 3:47 PM
Subject: Re: history of source control binary search for debugging
Hi Russ,
I haven’t been in contact with Viet since about 2005 or 2006, so I
don’t know his whereabouts now.
Viet and I worked together in the compiler development group at Cray.
There were language-specific front-end teams for Fortran 90 and C/C++,
an optimizer team, and a backend team. Viet and I both worked on the
optimizer team. Each team delivered its component as a set of
relocatables (.o files), identified by a component version string.
We had a tool for building the various compilers from the relocatable
sets, by simply specifying the desired version of each component in a
version string given as an input parameter to the build tool. There
were other options, such as for linking up debug versions of the
compilers. The build tool retrieved the relocatables from an archive
we had set up, and linked them together as an executable. Each night,
cron jobs would build compilers for testing from the most recent
versions of each component. When regressions were discovered, we would
build compilers with all previously validated components and execute
the newly failing tests. If a given test failed even with a previously
validated compiler, the likely cause was a modified test or a
regression in an external library. But when the previously validated
components passed the newly failing test, we would try one new
component (e.g. the optimizer) together with previously validated
versions of the others. Since putting together the different combinations of
components was easy with our build tool, we could quickly identify
which new component introduced the regression.
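That isolation step is easy to sketch. In outline, with build() and
passes() as hypothetical stand-ins for Cray's build tool and test
harness, and with illustrative component names:

    # Minimal sketch of component isolation: try one new component at
    # a time against otherwise previously-validated components.
    # validated and latest map component name -> version string, e.g.
    # {"frontend": "7.1.0", "optimizer": "7.2.3", "backend": "7.1.9"}.
    def isolate_components(validated, latest, build, passes):
        culprits = []
        for name, new_version in latest.items():
            versions = {**validated, name: new_version}
            compiler = build(versions)  # link the chosen relocatables
            if not passes(compiler):    # run the newly failing test
                culprits.append(name)
        return culprits

With only a handful of components, a linear scan like this is cheap;
binary search pays off at the finer grain of individual source
changes, as described next.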
The optimizer group did something similar, but with source code rather
than relocatables. Cray had its own version control system called USM
(Unicos Source Manager) which had support for atomic changes, meaning
that a set of related changes in multiple files could be encapsulated
and treated as a single thing. We called those atomic change sets
“mods”. As far as I know, there were no other version control systems
supporting that at the time. Atomic change sets made it easy to
backtrack through the source code to identify the particular set of
changes causing a regression (or other behavior change). After proving
the viability of this approach with linear searches through the code,
we optimized it by using binary searches. This was important, because
an optimizer build from source could involve compiling something like
a million lines of code. Even on our supercomputers this could take a
while, so we wanted to do the minimum number of builds from source.
We called the process “mod isolation”.
After identifying all the mods causing regressions using our mod
isolation process, we would remove those mods and rebuild the
optimizer for the next round of testing. This was repeated until the
optimizer had no known regressions, and that version would be marked
as a release candidate. The “pulled” mods would be reworked by the
author and re-applied for a future release candidate.
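The cycle Brian describes can be sketched compactly. Below,
fails_with() is a hypothetical callback that builds the optimizer from
the last good baseline plus the given mods (applied in order) and runs
the tests, returning True on any regression; the baseline alone is
assumed to pass:

    # Minimal sketch of mod isolation: repeatedly binary-search for
    # the shortest failing prefix of mods, pull the mod that
    # introduced the regression, and retest until no regressions remain.
    def isolate_mods(mods, fails_with):
        pulled = []
        while fails_with(mods):
            lo, hi = 0, len(mods)  # mods[:lo] passes, mods[:hi] fails
            while hi - lo > 1:
                mid = (lo + hi) // 2
                if fails_with(mods[:mid]):
                    hi = mid
                else:
                    lo = mid
            pulled.append(mods[hi - 1])       # the regressing mod
            mods = mods[:hi - 1] + mods[hi:]  # pull it, keep the rest
        return pulled

Since every probe here is a build from source, the binary search keeps
the number of million-line builds logarithmic in the number of mods.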
This process allowed us to move from infrequent delivery of
non-regressing optimizers to one- or two-week delivery cycles.
I wasn’t the one to come up with the idea for mod isolation at Cray,
but I was the one who implemented it and automated much of the
process. The COMPSAC paper resulted from Viet’s PhD advisor being
hired by Cray to evaluate our development processes. During his
evaluation, he discovered our mod isolation process, and said he had
never seen this being done before. He asked us to write the paper.
I presented it at COMPSAC. We were not able to find much prior work on
this, probably because there wasn’t support for atomic changes in the
version control tools in use at that time, with the exception of USM.
Of course, now this functionality is available within git as “git
bisect”.
(I used “git bisect” just yesterday.)
I hope this helps you. Let me know if you want any other info.
-Brian