While there are benefits to that approach, I'm glad they didn't do things that way. Putting performance issues aside, I think it's a lot easier to develop, test and maintain a small single-purpose language than a cross-platform library with a public API and a suite of bindings to an evolving language like Python.
Given the size of its team, DTrace would never have succeeded as a
library with bindings to even one dynamic language like Python,
because the support burden of doing that is enormous. Developing,
maintaining and testing dynamic language bindings on multiple
platforms has got to be one of the suckiest programming tasks in the
industry. Just ask anyone who has ever had to support a PyQt or
wxPython application on multiple platforms. There's a good reason why
Python's only stable cross-platform GUI toolkit is the one whose
underlying GUI library has been dead for a decade.
Maintaining cross-platform bindings to a popular dynamic language like
Python, Ruby or Perl is like fighting a war on three fronts. You have
to expose the basic features of the library in a useful way while the
OS vendors are breaking things in their zeal to support new features
and hardware, and while at the same time the dynamic language
developers are "improving" their languages in significant semantic
ways.
In contrast, the DTrace developers and porters only need to worry
about kernel internals - the D language is trivial to maintain, and
nobody who matters in that community wants it to be anything other
than what it is: a tool for admins to find out who or what to blame
when something isn't working as it should. The people who love DTrace
the most, and who are willing to part with their cash for it, aren't
programmers or hackers; they're DBAs and storage guys who get wet
dreams about stability, uptime and security, and who don't tolerate
willy-nilly changes to things that work for them. Worrying that D
will evolve too quickly is like worrying that the gdb command set
will get out of control.
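To give a sense of how small that surface area is: a typical D script is little more than a probe description and an action. This sketch (my own generic example, not from any particular vendor's docs) counts system calls by the process that issued them - exactly the kind of "who do I blame" question the language was built for:

```d
/* Tally system calls per process name; DTrace prints the
   aggregation automatically when the script exits (Ctrl-C). */
syscall:::entry
{
    @counts[execname] = count();
}
```

You'd run it with something like `dtrace -s syscalls.d`; exact probe names and availability vary by platform and kernel version. The point is that the whole language is roughly this shape - predicates, probes and aggregations - which is why there's so little of it to break.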