Hacker News
Reducing Python's startup time (lwn.net)
219 points by vanni on Aug 30, 2017 | 161 comments


To be clear: I never said that start-up time wasn't important. Instead, I pointed out that in the previously studied example, the measured impact on start-up time of a single named tuple was a fraction of a millisecond and less than 0.1% of start-up time.

At Guido's behest, I agreed to treat this minuscule component of start-up time as being important and will review and implement some optimizations in the sprints next week. I've agreed to do this even if it comes at the expense of code clarity, complexity, or maintainability. I expect that when I'm done, _source will be a lazily generated attribute and that much of the code currently in the template will be pre-computed. This seems reasonable to me.

FWIW, I've devoted 16 years of my life to optimizing Python and care a great deal about start-up time. I'm not happy about all the Raymond bashing going on in this thread.


Thank you Raymond for all your great work.

Regarding this post and related comments, I would like to say that, as an external observer who does not know the details of the CPython decision-making process, phrases such as "Okay, then Nick and I are overruled" and "At Guido's behest" do not sound so good to my ears.

Maybe you and Guido are best friends and you are half-joking with that tone, or maybe you are responsible for that part of the code, so it is your duty to change it according to the BDFL's last word even if you do not agree with it... I do not know. Anyway, I think it would be better, in the open source world, if a code change were developed by someone who thinks it is the right thing to do; otherwise I cannot see much difference from commercial companies and proprietary software.

Hey, just my two cents. Keep rocking! :)


I liked those comments, and I thought they reflected well on Raymond. It's a sign of a healthy technical organization that contributors are willing to disagree and still heed decisions made by those higher in the decision-making chain.


Did the author contact you before publishing? The article does not exactly provide an unbiased version of the story. It tries to assign blame.


The author did not contact me.


I too would like to thank you for the great work you did on Python and CPython.

I did not follow the whole thread, but did not see any "Raymond bashing". Most people are well aware of the elegance and beauty you added to Python.


Raymond, thank you for your great work. All your contributions and talks have been inspiration for many.

I am sure the Python community has much more to gain from having these arguments than from blindly following any trend.


namedtuple is particularly weird. When you create a new type of namedtuple, Python creates the source code for a new class from a string template and passes it to exec() to build the class. It's a clever and relatively straightforward bit of metaprogramming, but it's no surprise that it's not fast. https://hg.python.org/cpython/file/b14308524cff/Lib/collecti...
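To make the technique concrete, here is a toy illustration of the exec()-based approach; `tiny_namedtuple` is my own stripped-down sketch, not the real stdlib template at the link.

```python
# Toy sketch of the exec()-based approach: build class source from a string,
# exec it, and pull the resulting class out of the namespace.
def tiny_namedtuple(typename, fields):
    args = ", ".join(fields)
    source = (
        f"class {typename}(tuple):\n"
        f"    def __new__(cls, {args}):\n"
        f"        return tuple.__new__(cls, ({args},))\n"
    )
    for i, name in enumerate(fields):
        # one property per field, reading from the underlying tuple slot
        source += f"    {name} = property(lambda self: self[{i}])\n"
    namespace = {}
    exec(source, namespace)  # compiling and executing this string is the per-class cost
    return namespace[typename]

Point = tiny_namedtuple("Point", ["x", "y"])
p = Point(1, 2)
print(p.x, p.y)  # 1 2
```

The compile step inside exec() runs on every class creation, which is exactly why it can't be cached in a .pyc.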

(BTW if you need a quick and dirty way to make Python start up faster, see if you can use the -S flag for your application. It skips the site-specific initialization and has a significant impact on startup times.)
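To confirm -S is taking effect, you can check whether the site module (the thing -S skips) was ever imported:

```shell
# The site module is what -S skips; with it, the interpreter never sets up
# site-packages paths or runs sitecustomize hooks.
python3 -c 'import sys; print("site" in sys.modules)'     # True
python3 -S -c 'import sys; print("site" in sys.modules)'  # False
```

Note that skipping site also means packages installed in site-packages won't be importable unless you add the paths yourself.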


I'm curious, given the presence of a ".pyc", shouldn't these kinds of costs be paid primarily on first invocation "ever"?


Not in the general case, because eval/exec accepts an arbitrary string which could change at runtime.


No, because the string given to eval will not be stored in the pyc – it has to be compiled on every import.


It can't be cached? Is not caching a dependency issue?


It can't, because lol dynamic dispatch. Exercise: what does this code do?

    import collections
    collections.namedtuple = lambda _0, _1, _2=False, _3=False: "Where is your god now?"
    import b
b.py:

    import collections
    print(collections.namedtuple('Point', ['x', 'y']))
And this is not even particularly 'evil' code, in the sense of relying on internal implementation details; that's just your bread-and-butter Python language semantics. It prevents (or at least complicates) many other potential optimisations as well.


You should link to the GitHub repo instead of Hg. They put effort into the move.



Cool use of eval. I wonder if there are metaprogramming-aimed libraries for generating eval'd code.


Why would you want that when you have actual metaprogramming on code-as-data in the standard library?


Is this a joke on the namedtuple trick or is there actually such a module in the standard library?


Not a joke, but "code as data" was hyperbolic.

You can do some pretty amazing metaprogramming in Python with metaclasses, and the inspect and ast modules give you access to the guts of the language.


Really? Huh. I would have expected a metaclass.


While using a metaclass is indeed cleaner, it's slower than generating code on the fly and exec-ing it.

This video talks about it: https://youtu.be/sPiWg5jSoZI?t=1h37m18s
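For illustration, here is my own minimal sketch of the metaclass alternative being alluded to (not how CPython's namedtuple is actually implemented): the metaclass generates per-field properties at class-creation time, so no source string ever has to be compiled.

```python
# Sketch: a metaclass-based record type. No exec(), no string template;
# properties are created directly in the class namespace.
class RecordMeta(type):
    def __new__(mcls, name, bases, ns):
        for i, field in enumerate(ns.get("_fields", ())):
            # bind i via a default argument so each property reads its own slot
            ns[field] = property(lambda self, i=i: tuple.__getitem__(self, i))
        return super().__new__(mcls, name, bases, ns)

class Point(tuple, metaclass=RecordMeta):
    _fields = ("x", "y")
    def __new__(cls, x, y):
        return tuple.__new__(cls, (x, y))

p = Point(1, 2)
print(p.x, p.y)  # 1 2
```

The claim in the parent comment is that attribute access through properties like these is slower at runtime than the specialized code the exec() template produces.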


I guess that's how Python 3.6's typing.NamedTuple works, which is frankly a much nicer syntax for named tuples.


It's not actually; typing.NamedTuple is just a wrapper that calls the original collections.namedtuple. It does provide some additional features, like default values and methods.
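For instance, a small sketch of the 3.6+ class syntax with a default value and a method (the names here are illustrative):

```python
from typing import NamedTuple

class FullName(NamedTuple):
    forename: str
    surname: str = "Smith"     # default values are supported

    def display(self) -> str:  # so are ordinary methods
        return f"{self.forename} {self.surname}"

n = FullName("Edward")
print(n.display())            # Edward Smith
print(isinstance(n, tuple))   # True -- still a real tuple underneath
```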


> It's a clever and relatively straightforward bit of metaprogramming

Wat? It's neither of those things.


Should we take an informal survey?


I'd been wondering why _source was in there. I'm a little surprised that Hettinger thinks it's so important to retain it. It's a neat demonstration of how to make a new data type, and I like that it's so straightforward, but I don't buy the argument that slowing down the interpreter is important for didactic reasons.

Edit: I'm talking about namedtuple, the implementation of which is slow because namedtuple() has to construct _source for every new namedtuple type. Which was clear from the article but not from my comment.


One purpose of _source is to make the namedtuple self-documenting -- that has been a considerable benefit over the years and has resulted in many fewer questions and issues than we usually get about our other tools.

Another purpose of _source is so that you can write it out to a module. This lets you pre-compute the named tuple for speed and it makes it possible to easily Cythonize it or to customize it (in particular, people like to take out the error checking).

Since the namedtuple() factory function was already creating the source string as part of its operation, just storing it and exposing the attribute had nearly zero additional cost.

That said, don't worry about it, next week I plan to make this a lazily generated attribute so you all can sleep soundly at night :-)
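For illustration only, here is one generic way a lazily generated class attribute can work: a non-data descriptor that builds the string on first access and caches it. The toy template below is my own; this is not the actual CPython change.

```python
# Sketch (toy template, hypothetical names): defer generating _source until
# someone actually reads the attribute, then cache the result on the class.
_TEMPLATE = "class {name}(tuple):\n    _fields = {fields!r}\n"

class _LazySource:
    def __get__(self, obj, cls):
        src = _TEMPLATE.format(name=cls.__name__, fields=cls._fields)
        cls._source = src  # overwrite the descriptor with the cached string
        return src

class Point(tuple):
    _fields = ("x", "y")
    _source = _LazySource()  # nothing is generated until someone asks

print(Point._source)  # built now, on first access; later reads hit the cache
```

With this shape, classes that never touch _source never pay for generating it, which is the start-up win.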


It's an interesting and quite nice implementation. I've been using Python for 10 years, and vaguely knew namedtuple used exec, but never knew there was a _source attribute.

I'm definitely looking forward to faster startup though.

I also didn't realise you can't pickle namedtuple by default, which probably cost me a few days on a project; I don't think I tracked this down as the reason, but it makes sense now.
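A plausible reading of the pickling issue (my interpretation, not necessarily the commenter's actual bug): module-level namedtuples pickle fine, because pickle can re-import the class by name; the trap is a namedtuple class created somewhere pickle can't find it again, such as inside a function.

```python
import collections
import pickle

# Module-level namedtuple: round-trips fine, since pickle can look the
# class up by its module and name.
Point = collections.namedtuple("Point", ["x", "y"])
assert pickle.loads(pickle.dumps(Point(1, 2))) == Point(1, 2)

def make_local():
    # Created inside a function: "Local" is not an attribute of this
    # module, so pickle's class lookup fails.
    Local = collections.namedtuple("Local", ["a"])
    return Local(1)

try:
    pickle.dumps(make_local())
except (pickle.PicklingError, AttributeError) as exc:
    print("pickling failed:", exc)
```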


Thanks for the clarification! It makes a lot more sense to me now why _source would be part of namedtuple. I think the LWN article must have exaggerated how much of a penalty is being paid for it. Anyway, I appreciate your work on namedtuple -- I use it constantly.


Does this mean you're changing how Namedtuples are implemented, or just not saving _source?


Yeah, I'm not getting the need to keep _source in collections.namedtuple at all. For teaching purposes, you can just present the existing implementation...I don't see what purpose is served by having that teaching code in the core distribution.


In case you overlooked, Raymond H explained it in another thread[1].

[1] https://news.ycombinator.com/item?id=15135569


Not everyone is familiar with looking at the core source, or even reading the docs.


Actually, your edit made it clearer than both. I had the feeling that namedtuple was constructing _source, but it wasn't obvious in the article either. So thanks for explaining, both in your comment and in the article.


Seems like this article is inflammatory and totally unnecessary.

It is written in a way to pin the blame on someone (Raymond H), and looking at python-dev (and even the GitHub issue), the decision was made a month earlier (July) to go with the optimization[1], and Raymond complied[2].

[1] https://mail.python.org/pipermail/python-dev/2017-July/14860... [2] https://mail.python.org/pipermail/python-dev/2017-July/14861...


I have grown to seriously dislike 'namedtuple'. It seems great, and it's so easy to use, but there are so few places it can be used without dragging in future backwards-compatibility baggage.

Let's say you have a light-weight object, like a "full name" which can be represented with a "forename" and a "surname". (Yes, there are many names which don't fit this category. This is a toy example.) So you write:

  from collections import namedtuple
  FullName = namedtuple("FullName", "forename surname")
  name = FullName("Edward", "Smith")
So simple! So clean! No "self.forename = forename" or need to make your own __repr__.

Except you also inherit the tuple behavior, because this isn't really a light-weight object but a ... let's call it a heavy-weight tuple.

  >>> name[0]
  'Edward'
  >>> name[1]
  'Smith'
This means people may (okay, are likely to) do things like:

  forename, surname = name
or even

  for forename, surname in names:
    ...
Even if that's not what you want in your API. If you care about backwards compatibility, then if you replace the namedtuple with its own class then you'll have to re-implement __getitem__, _replace, __len__, and count - methods which make no sense for a simple name class.

Or, if you decided to change the API, to make it be:

  FullName = namedtuple("FullName", "forename middlename surname suffix")
then you'll again break code which used the full indexing API that you originally implicitly promised, simply by saying this is a namedtuple.

'namedtuple' has one good use - migration from a tuple object to a real object while allowing backwards compatibility with the tuple API. For example, the old tuples once returned from os.stat() and time.gmtime().

It's also fine if it will only be used by code where you have full control over its use and can either prevent inappropriate API use like destructuring assignment, or can update all uses if you want to change the API. For example, a light-weight object used in a test-suite or internal helper function.

Otherwise, don't use it. Spend the extra few minutes so in the future you don't have to worry about backwards compatibility issues.


I disagree -- namedtuple has a crucial property that I've found incredibly useful. Namedtuples are immutable. This makes it impossible (well, not really, but hard) for a namedtuple to change its state without telling me about it, because if it changes state it's not the same namedtuple any more. I agree that namedtuple being a sequence type is problematic, but deciding not to use a lightweight immutable data structure because your colleagues might use it badly is throwing out the baby with the bathwater.
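The immutability in question, illustrated:

```python
import collections

Point = collections.namedtuple("Point", ["x", "y"])
p = Point(1, 2)

try:
    p.x = 99                # fields can't be rebound in place
except AttributeError as exc:
    print("rejected:", exc)

q = p._replace(x=99)        # "changing" state yields a new object instead
print(p, q, p is q)         # Point(x=1, y=2) Point(x=99, y=2) False
```

(The "not really" hedge above: immutability can still be defeated via tricks like object.__setattr__ on mutable field values, but accidental mutation is effectively ruled out.)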


I was not speaking of in-house software where only "your colleagues might use it badly". In that case, you have access to the entire source code and aren't so worried about backwards compatibility.

I was referring to publishing a library which uses a namedtuple as part of the public API.

Your argument seems to be that there are good things about namedtuple - which I agree with - so therefore I shouldn't be so dismissive of it. That might be reasonable if there were no other easy way to construct a light-weight object with immutable attributes.

But the attr package exists, as js2 pointed out in a sibling branch. See http://www.attrs.org/en/stable/examples.html#immutability .


If you're writing a python library, you should already be aware that there's only _so much_ you can do to prevent the library's users from using it incorrectly or oddly - this is Python after all, there's no private/protected.

From that perspective, I don't see the problem with returning a namedtuple, if you document it and write your changelogs appropriately when/if you add a field.


We must assume people are being reasonable.

If someone uses a module where the top-level docstring says "not part of the public API", and which starts with an "_", then that person is not being reasonable, by well-established Python developer standards. I have no objections to breaking compatibility for people who reach into a private API.

It's different if I hand them an object and say "use it". Even if I don't document anything, it's implicit that this is part of the public API.

These are two different things, though yes, there can be ambiguity.

"if you document it and write your changelogs appropriately when/if you add a field."

I tried to point this out earlier. It takes only a few (boring) minutes to write the essentially boilerplate code.

While over the lifetime of the project it will likely take more time to write the supporting documentation and deal with the effects of breaking compatibility.


I just gave attr another look -- I bounced off it before because of the cutesy naming. It's a matter of taste, but I thought it was unlikely that a library that made such a grating decision would be fun to use. My second impression is that I've got to give attr points for clarity: the implementation is short and to the point, considering how much it monkeys around with Python's classes. I can see why you'd want to use it for an API, because it won't endow an object with unwanted Sequence methods. Not an issue in my current job, but your point is well taken.


Thanks!

Personally, I suck it up and write the boring boilerplate, because I prefer to avoid dependencies and most of the things I work on are low-level. But I'm special that way. ;)


I recommend avoiding the attrs module. You might save time writing, but you'll lose far more than that in reading time.


> Except you also inherit the tuple behavior.

It sounds like you might be happier with attrs if you want an easy to construct class that doesn't have tuple behavior:

http://www.attrs.org/

http://www.attrs.org/en/stable/why.html#namedtuples


I think you are missing the best aspects of namedtuple. The positional properties are there to ease interaction with other APIs. The vast majority of access should be through `.member`.


No, I agree with eesmith: I've been bitten multiple times by bugs from namedtuple objects providing the tuple interface which I'd never intended to use. So I stopped using namedtuple in favor of a new more-restricted module I wrote to replace it.


Out of curiosity, what kind of bugs have you see that arose from "providing the tuple interface" inappropriately?


All I remember clearly still is the baffled frustration, like you feel when a C compiler optimizes out undefined behavior you didn't know about. I was using namedtuple in generally this kind of way: https://github.com/darius/tinyhiss/blob/master/terp.py#L90 (this was a trampolined interpreter for a dialect of Smalltalk).

One thing that could happen is, when you might expect to see a type error if you take one of these objects x and mistakenly write x[0], it'd just continue on with some field value. Or if you mistakenly try to iterate over x, you can. Or maybe you were generically walking a tree of objects and didn't intend for an instance of one of these classes to have children. The bafflement I ran into was weirder than these; maybe it had to do with the fact that inheritance was mixed in here too? And I think I'd defined `__getitem__` in some of these classes one of those times.

Sorry I can't remember more. To write https://github.com/darius/tailbiter I had to know quite a bit about Python, but it's a big language! It was a mistake to assume I could use namedtuple as just a convenient way to define a struct type without grokking all its details.


Python's JSON serializer treats NamedTuples as lists. Not wrong once you understand NamedTuples are tuples, but completely unintuitive.

  In [13]: from collections import namedtuple

  In [14]: import json

  In [15]: nt = namedtuple("Test", ["field1", "field2"])

  In [16]: sample = nt(field1="foo", field2="bar")

  In [17]: json.dumps(sample)
  Out[17]: '["foo", "bar"]'
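A common workaround (my addition, not from the thread): serialize the `_asdict()` form when you want the field names preserved in the JSON.

```python
import collections
import json

Test = collections.namedtuple("Test", ["field1", "field2"])
sample = Test(field1="foo", field2="bar")

# _asdict() keeps the field names, so the JSON becomes an object, not a list
print(json.dumps(sample._asdict()))  # {"field1": "foo", "field2": "bar"}
```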


I could see that.

If the majority of your use case is just immutability and named member access, the tupitility of it doesn't matter much.

In that regard, we just need a lightweight way to specify immutable structs, which I would throw a tupleware party to support.

Maybe include a .to_tuple() method on your immutable structs


Only if one is misusing namedtuple: it should be used for iterable things, like csv lines, db rows, coordinates, etc. but not for a person. That makes no sense.

If what you need is a cheap, quick container, then you want types.SimpleNamespace.


I have never used namedtuple in the manner you describe. I always use it as an easy way to create what is essentially a new data type with a bunch of named fields. This is also what I understand to be the normal use case for it.

And I agree with the criticisms: if namedtuple is slow, but so useful that it's all over the place, perhaps we should extend the language to allow these kinds of easy-to-create data types.


"but not for a person. That makes no sense."

I wrote 'toy example' for a reason. While it does not make domain sense, it's easy to understand and it highlights the problems I wanted to show.

"it should be used for iterable things"

Really? This article concerned the use of namedtuple in

  functools.py
  21:from collections import namedtuple
  403:_CacheInfo = namedtuple("CacheInfo", ["hits", "misses", "maxsize", "currsize"])
The cache_info() method returns a _CacheInfo instance to the user. Where/what are the "iterable things"?

If a future version adds another attribute, it will break most code which uses destructuring assignment.
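To make that breakage concrete (illustrative; "evictions" is a made-up future field):

```python
import collections

# Today's four-field API, unpacked by client code:
CacheInfo = collections.namedtuple("CacheInfo", ["hits", "misses", "maxsize", "currsize"])
hits, misses, maxsize, currsize = CacheInfo(1, 2, 128, 3)

# Tomorrow the API grows a hypothetical fifth field...
CacheInfo = collections.namedtuple("CacheInfo", ["hits", "misses", "maxsize", "currsize", "evictions"])

try:
    # ...and the same client code now blows up:
    hits, misses, maxsize, currsize = CacheInfo(1, 2, 128, 3, 0)
except ValueError as exc:
    print(exc)  # too many values to unpack (expected 4)
```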

(For that matter, I can't see why it make sense to use a namedtuple for this case. Why not just:

  class _CacheInfo:
    def __init__(self, hits, misses, maxsize, currsize):
      self.hits = hits
      ...
? Who cares about immutability here?)

Or for another case, here's a Python function which returns a class derived from a namedtuple:

  >>> from urllib import parse
  >>> result = parse.urlparse("http://www.python.org/")
  >>> for i, term in enumerate(result):
  ...   print(i, repr(term))
  ...
  0 'http'
  1 'www.python.org'
  2 '/'
  3 ''
  4 ''
  5 ''
Again, where is the "iterable thing"? I mean, the ParseResult instance is iterable, but my point is that it makes no sense to iterate over those fragments.

If the Python standard library gets it wrong, what is the right solution?

"If what you need is a cheap, quick container, then you want types.SimpleNamespace. "

You mean the one where the documentation at https://docs.python.org/3/library/types.html#types.SimpleNam... says "SimpleNamespace may be useful as a replacement for class NS: pass. However, for a structured record type use namedtuple() instead." ?

I do not think you are correct.


There are plenty of issues with the stdlib. Parts of it don't respect PEP 8, there are inconsistencies with argument types and orders, some of it is even unusable for real-life usage (e.g. csv in Python 2.7), and we keep it for legacy reasons.

The fact is, the use cases you are showing are wrong, and the ones I'm talking about are the proper ones. It's ok to make mistakes, we all make a lot of them. And we will have to pay for those.

If you are interested in the subject, last month we had a huge debate on the Python mailing list about a way to have a proper official immutable but non-iterable structured record to replace namedtuple and avoid those abuses. And also to maybe include something like attrs in the stdlib.


It would be nice if you could show me a proper example, as what you wrote matches neither my understanding of how it is used nor how it should be used.

My argument is to resist the temptation to use namedtuple for anything except a migration from tuples to named attributes. While there is some overlap with what you wrote, they are not the same. I would like to know what I'm missing.


A proper use for namedtuple:

    @property
    def coordinates(self):
        x = self.calculateX()
        y = self.calculateY()
        return x, y
this could become:

    Point = namedtuple('Point', ('x', 'y'))

    @property
    def coordinates(self):
        x = self.calculateX()
        y = self.calculateY()
        return Point(x, y)
It stays compatible with the previous implementation, as you can still do:

    x, y = stuff.coordinates
But now you can do:

    point = stuff.coordinates
    point.x
The same example applies for a row from a timeseries in a database or a line of stats in a csv.

But now imagine you have a response from an LDAP server returning a list of authorized persons. You could manipulate those persons in a dictionary. Some people may want a nicer way to access them, with the lookup syntax. You can be tempted to use a namedtuple here, but that makes no sense, as a person is not iterable. The alternative is to use an empty class, but it's very verbose.

But now you can use types.SimpleNamespace:

    >>> import types
    >>> class Person(types.SimpleNamespace): pass
    >>> p = Person(name='foo', age=12) # no __init__ to write
    >>> p # free repr
    Person(age=12, name='foo')
    >>> p.age # nice access syntax
    12
The good thing is that if you ever need to extend Person to later have a more complex behavior, you can just do so as it's a regular class.

The bad things are:

- you may need something immutable

- you may want some checks done to restrict the attributes used

Which is why currently people abuse namedtuple. We don't have a good story for those. Hence the debate on the mailing list.

Nevertheless, it's good to remember that the Python community has a philosophy of "we are all consenting adults here" and a good track record of following it. Immutability and attribute restrictions are not definitive show-stoppers in Python, and SimpleNamespace is a decent solution while waiting for a purer one to come up.


Your example with Point() is exactly what I said was a good use case, for apparently the same reasons why I said it was a good use case.

I said "'namedtuple' has one good use - migration from a tuple object to a real object while allowing backwards compatability to the tuple API. ... It's also fine if it will only be used by code where you have full control over its use and can either prevent inappropriate API use ... Otherwise, don't use it."

You replied "Only if one is misusing namedtuple".

I interpreted this as an objection to my use guidelines. From what I can tell, you are in complete agreement with me.


This is, in fact, the exact reason for the _source attribute which is being discussed in this article - you can use namedtuple to easily create a structure, get its source, and then edit it to suit your individual project needs. E.g.:

  >>> from collections import namedtuple
  >>> FullName = namedtuple("FullName", "forename surname")
  >>> print(FullName._source)
  from builtins import property as _property, tuple as _tuple
  from operator import itemgetter as _itemgetter
  from collections import OrderedDict

  class FullName(tuple):
      'FullName(forename, surname)'

      __slots__ = ()

      _fields = ('forename', 'surname')

      def __new__(_cls, forename, surname):
          'Create new instance of FullName(forename, surname)'
          return _tuple.__new__(_cls, (forename, surname))

      @classmethod
      def _make(cls, iterable, new=tuple.__new__, len=len):
          'Make a new FullName object from a sequence or iterable'
          result = new(cls, iterable)
          if len(result) != 2:
              raise TypeError('Expected 2 arguments, got %d' % len(result))
          return result

      def _replace(_self, **kwds):
          'Return a new FullName object replacing specified fields with new values'
          result = _self._make(map(kwds.pop, ('forename', 'surname'), _self))
          if kwds:
              raise ValueError('Got unexpected field names: %r' % list(kwds))
          return result

      def __repr__(self):
          'Return a nicely formatted representation string'
          return self.__class__.__name__ + '(forename=%r, surname=%r)' % self

      def _asdict(self):
          'Return a new OrderedDict which maps field names to their values.'
          return OrderedDict(zip(self._fields, self))

      def __getnewargs__(self):
          'Return self as a plain tuple.  Used by copy and pickle.'
          return tuple(self)

      forename = _property(_itemgetter(0), doc='Alias for field number 0')

      surname = _property(_itemgetter(1), doc='Alias for field number 1')
You can then pretty simply remove the inheritance from tuple and change the properties to get the behavior you describe.


I do not think it's simple to remove the __getitem__ lookup behavior from this template. To start with, forename depends on _itemgetter(0).

I might as well re-write it as:

  class FullName(object):
    __slots__ = ("_forename", "_surname") 
    def __init__(self, forename, surname):
       self._forename = forename
       self._surname = surname
    def __repr__(self):
       return "%s(forename=%r, surname=%r)" % (
         self.__class__.__name__, self._forename, self._surname)
    @property
    def forename(self):
      return self._forename
    @property
    def surname(self):
      return self._surname
That took about two minutes to write. How long does it take you to rewrite the _source so it doesn't support __getitem__ or __len__?


This is a nice touch but insisting that it should be part of the stock functionality for a basic data structure makes no sense to me.


This is Python, where lots of things are iterable/indexable. If for some reason you don't want an iterable/indexable object, then don't make one. But don't blame a tool other people use for the fact that it wasn't the tool you wanted it to be.


If you want an immutable object with a fixed set of fields - not an unreasonable request, mind you - the Python standard library doesn't give you much to work with. NamedTuples are the closest thing we have, but they're so clunky to use that they're not widely used.

It's incredibly frustrating, because this leads to external libraries abusing mutable objects everywhere, Django being one of the worst offenders.


I am pointing out a long-term negative with using namedtuple that is not obvious at first use, in hopes that others might gain some insight on when to use it and when not to use it.

Other people can and do use tools incorrectly. I use tools incorrectly. That doesn't mean we shouldn't learn from the mistakes of others.


I agree that namedtuples shouldn't be used if you don't actually want a tuple.


I concur :-)

Otherwise, use a dict, types.SimpleNamespace, python-attrs, or a custom class.

FWIW, the origin of namedtuple() was that some variation of it had been re-invented many times and this tool brought together the common features of those tools (substitutability for regular tuples, compact storage, usability as a dictionary key, named fields, a clear __repr__, etc.)
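A couple of those common features in action (illustrative):

```python
import collections

Point = collections.namedtuple("Point", ["x", "y"])
p = Point(1, 2)

# usable as a dictionary key, and substitutable for a plain tuple
grid = {p: "home"}
assert grid[(1, 2)] == "home"

print(p)          # Point(x=1, y=2) -- the clear __repr__
print(p._fields)  # ('x', 'y')
```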


Namedtuple is designed for backwards compatibility with tuples. It's there to make an easy transition from using tuples as structs to using something with attributes, without breaking all the code you just demonstrated.


Yes, that was the "one good use" I mentioned.

However, in practice it's not used that way. This specific LWN article concerns the following code from functools.py:

  _CacheInfo = namedtuple("CacheInfo", ["hits", "misses", "maxsize", "currsize"])
It is used like this:

    def decorating_function(user_function):
        wrapper = _lru_cache_wrapper(user_function, maxsize, typed, _CacheInfo)
     ...
    def _lru_cache_wrapper(user_function, maxsize, typed, _CacheInfo):
     ...
    def cache_info():
        """Report cache statistics"""
        with lock:
            return _CacheInfo(hits, misses, maxsize, cache_len())
The _lru_cache_wrapper is then replaced by a C extension in _functoolsmodule.c which only uses the cache_info_type as:

  static PyObject *
  lru_cache_cache_info(lru_cache_object *self, PyObject *unused)
  {
      return PyObject_CallFunction(self->cache_info_type, "nnOn",
                                   self->hits, self->misses, self->maxsize_O,
                                   PyDict_GET_SIZE(self->cache));
  }
This is basically the same code, just in C.

There is no reason for this to expose a tuple API. And yet it does.

  >>> @functools.lru_cache(maxsize=None)
  ... def fib(i):
  ...   if i <= 1: return 1
  ...   return fib(i-1)+fib(i-2)
  ...
  >>> for i in range(10):
  ...   print(i, fib(i))
  ...
  0 1
  1 1
  2 2
  3 3
  4 5
  5 8
  6 13
  7 21
  8 34
  9 55
  >>> fib.cache_info()
  CacheInfo(hits=17, misses=10, maxsize=None, currsize=10)
  >>> for x in fib.cache_info():
  ...   print(x)
  ...
  17
  10
  None
  10
A spot check shows no uses of namedtuple in the Python standard library which are used to keep backwards compatibility with tuples, though I only looked at a few.

So while I agree with you, it appears that the core developers do not agree with us.


Quoting the code as it is doesn't show its development history. The cache_info could have been a tuple originally then updated to a namedtuple.

Regardless, through conversation I am confident that at least one of the core devs thinks about namedtuple in that manner. Not as namedtuple's only purpose, but as one major benefit. Further, the core devs are not a single entity, but many people who don't always agree with each other.

Edit: note Raymond's comment elsewhere on this page.


You are correct, and I apologize for not looking into this more deeply first.

For the specific case under discussion, ncoghlan added cache_info() on 30 Nov 2010 - https://github.com/python/cpython/commit/234515afe594e5f9ca1...

Previously, "Performance statistics stored in f.cache_hits and f.cache_misses." With the change "Significant statistics (maxsize, size, hits, misses) are available through the f.cache_info() named tuple."

There was no time when it was a tuple, much less in a tuple in a public release.

I'll look first at the namedtuples which have been present since at least 2.7.10.

The named tuples in "urllib/parse", "difflib", "inspect", "sched", and "doctest" were originally tuples. The decimal module uses a DecimalTuple as the return value from as_tuple(), so I didn't check that history.

ssl uses a DefaultVerifyPaths added by tiran on 9 Jun 2013 - https://github.com/python/cpython/commit/6d7ad13a458afdf2cbd... . It did not replace a tuple. I assume _ASN1Object is the same way.

Now for the namedtuples in recent version control:

dis.py:_Instruction was added in https://github.com/python/cpython/commit/b39fd0c9b8dc6683924... . It did not replace a tuple.

aifc.py:_aifc_params replaces a tuple.

crypt.py:_Method replaced a class with a namedtuple in https://github.com/python/cpython/commit/daa5799cb8866785543... .

nntplib.py:GroupInfo and ArticleInfo added in https://github.com/python/cpython/commit/69ab95105f5105b8337... . It did not replace a tuple.

pkgutil.py:ModuleInfo replaces a tuple

platform.py:uname_result replaces a tuple

sndhdr.py:SndHeader replaces a tuple

sunau.py:_sunau_params replaces a tuple

tokenize.py:TokenInfo replaces a tuple

typing.py:nm_tpl ... is above my paygrade to understand. I believe it's required to be a namedtuple since that's what it's trying to implement.

wave.py:_wave_params replaces a tuple.

Finally, urllib/robotparser.py appears to contain a bug in the following:

  req_rate = collections.namedtuple('req_rate',
                                    'requests seconds')
  entry.req_rate = req_rate
  entry.req_rate.requests = int(numbers[0])
  entry.req_rate.seconds = int(numbers[1])
As I read it, this should fail as the req_rate is immutable. This appears to be written by someone who thinks that it's a lightweight object. The commit is https://github.com/python/cpython/commit/960e848f0d32399824d... . It did not replace a tuple.
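The immutability in question is easy to demonstrate with a minimal sketch (not the robotparser code itself): namedtuple instances reject attribute assignment outright.

```python
from collections import namedtuple

# Minimal demo: namedtuple *instances* are immutable, so assigning to a
# field after construction raises AttributeError.
Rate = namedtuple('Rate', 'requests seconds')
r = Rate(5, 60)
blocked = False
try:
    r.requests = 10          # AttributeError: can't set attribute
except AttributeError:
    blocked = True
```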

In conclusion, 8 of the current uses of namedtuple are not to emulate a tuple. 13 of the current uses replace a tuple.


BTW, what tool did you use for your commit/diff search?


Brute force. grep to find where the namedtuples are used in version control and in Python 2.7, 'git annotate' to find the last time the 'namedtuple' code was touched, then manually browse the github history view to search for when the code was added.

If a given function was in python2.7 then it was easier. I checked if the code used to be a tuple.


Ah well. As far as I know, there's only BigQuery for full search of GitHub commits. They have some free usage, but it can quickly add up to $$ if you're not careful to restrict the query space.


Now that I think about it, I believe I could have used a git bisect using grep as the run command.


Good research. If you found a bug, report it!


Thank you. I will not. I reported a bug once and did not find the process at all enjoyable. It felt like several of the people in the thread thought my use case was stupid, and were quite blunt about it, to the point where I questioned my own competency. This bug isn't important enough for me to deal with that environment again.


That's frustrating. I've faced similar questions about use case, but the tone was simply that I'm an outlier rather than incompetent. Other times, the response has been quite pleasant and appreciative. I suppose it depends on which module you're reporting and who gets notified.


> There is no reason for this to expose a tuple API.

FWIW, that is a norm in the standard library that predates named tuples.

For example, sys.version_info and time.localtime() return structseq instances. Those both have a named tuple API. They are tuple subclasses with named fields. Accordingly, they are indexable, iterable, hashable, space-efficient, and have a self-documenting __repr__().
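A quick sanity check of that dual API:

```python
import sys
import time

# structseq objects support both tuple indexing and named-field access.
vi = sys.version_info
lt = time.localtime()
same_major = (vi[0] == vi.major)
same_year = (lt[0] == lt.tm_year)
```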


In my original comment I wrote "'namedtuple' has one good use - migration from a tuple object to a real object while allowing backwards compatibility with the tuple API. For example, the old tuples once returned from os.stat() and time.gmtime()."

The two examples you gave are examples of that good use.

time.localtime() used to return a plain tuple. See https://github.com/python/cpython/blob/770e4042db8a1c277b5d9... . The change to StructTimeType is an example of how to do that sort of migration.

Similarly, sys.version_info used to be a plain tuple. See https://github.com/python/cpython/blob/193a3f6d37c1440b049f7... . Again, a good example of how to migrate.

However, the "this" in the "There is no reason for this to expose a tuple API" which you quoted refers specifically to the namedtuple that cache_info() returns. That did not replace a tuple, it replaced two attributes, as you can see at https://github.com/python/cpython/commit/234515afe594e5f9ca1... .

And returning a simple type with a couple of attributes while also (and needlessly) treating it as a tuple was never a norm in Python before namedtuples, if only because it was a lot of work.


> it was a lot of work

Thus the desire for types.SimpleNamespace as others have noted.

Though we might have better design choices today than yesterday, there's also a downside to code churn. Case in point: the annoyance of trying to modernize threading from isAlive to is_alive.

Even when writing new code, it's often wiser to stick with a "worse" standard than to use a new design.


I agree with all of that (with the proviso that the SimpleNamespace documentation doesn't make it sound desirable).

I disagree with raymondh's statement that "that is a norm in the standard library that predates named tuples."


Moving "import" statements into dynamically-controlled blocks goes a long way in my experience, despite being flagged by tools like "pylint".

Buried imports free the interpreter from doing something until it is actually required; really nice if you just want to run "--help" and not wait 4 seconds for utterly unrelated modules to be found. It also creates this interesting situation where a script can technically be broken (e.g. module not found) but you don't care as long as the part you're using is OK.

Grouped imports are undoubtedly nice for purity and easily seeing dependencies but they may not be smart in a dynamic language. It is still pretty easy to "grep" to find imports if you're trying to track dependencies.


Would be nice if CPython had an option to enable "lazy imports".


One side benefit to removing the use of "eval" is that namedtuples will then be pickle-able. I'm surprised this hasn't come up: every now and then I hit some weird data I want to quick-and-dirty serialize but can't because there's a namedtuple buried deep inside it.


Namedtuples are pickle-able when created in the global scope, just as any dynamically created class. The main section of the collections module demonstrates that feature (read the source, Luke).

Removing eval() from namedtuple would not solve your issue; you'd need to change pickle instead. The trouble is that pickle tries to look up your namedtuple by name and can't find it in your module.
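The difference between the two scopes is easy to show; in this sketch the module-level class round-trips while the function-local one makes pickle's name lookup fail:

```python
import collections
import pickle

# Module-level namedtuple: pickle can find it by name, so it round-trips.
Point = collections.namedtuple('Point', 'x y')

def make_local():
    # Defined inside a function: pickle's lookup on the module fails,
    # because the module has no top-level attribute named 'Local'.
    Local = collections.namedtuple('Local', 'x y')
    return Local(1, 2)

roundtrip = pickle.loads(pickle.dumps(Point(1, 2)))

local_failed = False
try:
    pickle.dumps(make_local())
except pickle.PicklingError:
    local_failed = True
```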


This IMO is a bigger reason to change how namedtuples are implemented than their initialization performance.


One of the comments below the article mentions this, but in my experience on Linux and Mac, by far the biggest culprit is pkg_resources and its nasty habit of spidering the filesystem:

https://github.com/pypa/setuptools/issues/926

There are hacks to get around it, but it's a deep hole.


People often complain that using `entry_points` in setup.py is slow. I did some profiling and it came down to it importing some `pip` submodule (pip.vendor.something), which imported basically the whole of pip.

:(


Mercurial did a lot of work to reduce startup time via its demandimport module.

https://www.mercurial-scm.org/repo/hg/file/tip/hgdemandimpor...

Basically, it's a lazy loading of all imports. You can write `import foo` but it won't actually be imported until you do `foo.whatever()`.
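The same idea can be sketched with the stdlib's importlib.util.LazyLoader (this is the pattern from the importlib docs, not Mercurial's actual demandimport code): the module object exists immediately, but its code only runs on first attribute access.

```python
import importlib.util
import sys

def lazy_import(name):
    """Create a lazily-loaded module; executed on first attribute access."""
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)      # sets up lazy state; no code runs yet
    return module

json = lazy_import('json')          # nothing executed yet
result = json.dumps([1, 2])         # first access triggers the real import
```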

It's a crutch, and it's true that hg still pays the overall startup cost of Python. However, even with "45 times slower than git", the situation is not so dire:

  jordi@chloe:~$ time hg --version
  Mercurial Distributed SCM (version 4.2.2)
  (see https://mercurial-scm.org for more information)

  Copyright (C) 2005-2017 Matt Mackall and others
  This is free software; see the source for copying conditions. There is NO
  warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

  real	0m0.096s
  user	0m0.084s
  sys	0m0.008s

  jordi@chloe:~$ time git --version
  git version 2.1.4

  real	0m0.001s
  user	0m0.000s
  sys	0m0.000s
I really can barely perceive a difference between 0.096s and 0.001s. Since it really is a one-time startup cost, it's not like we can even say that this difference accumulates and that hg is overall 45 times slower than git.

Pierre-Yves also has an interesting talk about all of the tricks that have to go into hg in order to make it fast with Python. There's stuff like doing `method = object.method; method()` instead of doing `object.method()` over and over again to avoid paying the cost of method lookup and so forth:

https://2015.pycon.ca/en/schedule/53/
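A sketch of that micro-optimization, for anyone who hasn't seen it: binding the bound method to a local once avoids paying the attribute lookup on every iteration of a hot loop.

```python
def with_lookup(n=10000):
    out = []
    for i in range(n):
        out.append(i)       # attribute lookup on every iteration
    return out

def hoisted(n=10000):
    out = []
    append = out.append     # one lookup, reused in the loop
    for i in range(n):
        append(i)
    return out
```

Both produce identical results; the hoisted version simply does less work per iteration.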


>I really can barely perceive a difference between 0.096s and 0.001s. Since it really is a one-time startup cost, it's not like we can even say that this difference accumulates and that hg is overall 45 times slower than git.

It's bad for many things -- like showing VCS status on your shell prompt or IDE status line, where you repeatedly start hg.

100ms+ delay every time you hit enter in your shell is noticeable (and that's just the version, now doing an actual "dirty" check takes more).


Mercurial has something called command server[1], it's essentially a daemon listening on sockets to receive commands, designed just to avoid the cost of startup. I had to use it once in a CI system where we did invoke hg thousands of times for some reason i can't even remember.

That can also be used for frequent polling like shell status.

[1] https://www.mercurial-scm.org/wiki/CommandServer


In fact I have hg locally aliased to chg and have not noticed any issues with it. Just much faster command completions and an actually usable shell that includes hg annotations.


Put this at the end of your bashrc:

  PS1="${PS1}\$(sleep 0.1)"
Then tomorrow, we have another talk about how you can "barely perceive a difference" between 1 and 100 ms.


Okay. I just tried it right now and can't really notice after a few minutes. How can I reach you tomorrow to confirm? Do you have email? Email me at jordigh@octave.org and we can keep talking.


Interesting. I definitely noticed when I improved my prompt to render in 5 ms instead of 50 ms. There is no visible delay anymore between one command having executed and the prompt showing up and being ready to accept the next command. The shell feels much snappier as a result.


So, after a day of using this, I can tell you that I kind of notice it if I'm paying attention, but most of the time I simply don't really care.


Just curious, are you used to SSHing from the other side of the earth?


No, my ssh sessions usually feel instantaneous.


BDFL says:

> Concluding, I think we should move on from the original implementation and optimize the heck out of namedtuple.

I remember reading this on the mailing list and thinking "yes, namedtuple deserves optimization because it's an excellent resource".

For my use cases, I don't think I notice Python's startup time. When I write Python code it's usually not performance critical. When I do write performance-critical Python code, I usually care about total runtime and PyPy is usually a win here. Aside: if you use PyPy, you should probably be using namedtuples. IIRC it models those much better than it does dicts. And IMO namedtuples model many common data structures better than dicts do.

Just because I don't feel the pain of Python's startup time doesn't mean we shouldn't try to optimize it. I think I find my way into some esoteric Python/CPython corners but FWIW I've never needed namedtuple's "_source" attr.


Ruby's idea of a hash where the keys are colon symbols seems to really strike a sweet spot here— it's fast and storage-efficient like a tuple, but has the flexibility of a dict.


One big advantage of using namedtuples, aside from the immutability I mentioned elsewhere, is that you can rely on a namedtuple having particular fields. So if you're using dictionaries, you have to worry about KeyError wherever the dict is used, but if you're using namedtuples, you can't create them without assigning values to all of the fields. Dictionaries are still great for their intended purpose -- to represent a mapping -- but namedtuples are nicer if what you really want is a lightweight object.
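For example, construction enforces that every field is supplied, so downstream code never needs KeyError-style defensive checks:

```python
from collections import namedtuple

Point = namedtuple('Point', 'x y')

ok = Point(1, 2)            # fields are guaranteed to exist afterwards
missing = False
try:
    Point(1)                # TypeError: missing required argument 'y'
except TypeError:
    missing = True
```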


Setting default values for the namedtuple requires subclassing it, though, which is its own little bag of fun. See all the wailing and gnashing of teeth on SO: https://stackoverflow.com/search?q=namedtuple+subclass
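One lighter-weight trick that avoids subclassing (Python 3.7 later added a defaults= parameter to namedtuple itself) is to assign __defaults__ on the generated __new__:

```python
from collections import namedtuple

Point = namedtuple('Point', 'x y')
# Defaults apply to the rightmost fields, just like normal function defaults.
Point.__new__.__defaults__ = (0.0,)
p = Point(1.0)              # y falls back to 0.0
```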


If you're using Python 3.6, it's a lot easier now. You can do:

    from typing import NamedTuple

    class Point(NamedTuple):
        x: float
        y: float = 0.0
and then Point(1.0).y == 0.0.


That's quite nice!


When this is an issue, I tend to create a namedtuple with default values for the fields at the same time I'm defining the namedtuple type itself.

    NT = namedtuple("NT", ('a', 'b'))
    default_NT = NT(42, 'foo')
And then later, if I want to initialize only one field:

    my_new_NT = default_NT._replace(a=7)


I guess that works, but it's a pretty big departure from the expected API of a normal namedtuple, or any other object-with-constructor, for that matter.


If you really want it to look more traditional, you could do:

    NT.new = lambda **kwargs: NT(42, 'foo')._replace(**kwargs)
Now you can make NT objects like so:

    nt1 = NT.new()
    nt2 = NT.new(b='bar')
You can call it something else if calling it "new" makes you feel weird. (I'd call it something else, but naming things is hard.)


AFAIK, that wasn't Ruby's idea.


> The approach of generating source code and exec()ing it, is a cool demonstration of Python's expressive power

Man, I don't see many positive opinions of exec out there.

> I think we should move on from the original implementation and optimize the heck out of namedtuple. The original has served us well. The world is constantly changing. Python should adapt to the (happy) fact that it's being used for systems larger than any of us could imagine 15 years ago.

I really wish the BDFL could say the same for CPython itself. This continued insistence of treating python as a teaching language with a teaching implementation is really weird.


Raymond Hettinger, if I remember correctly, is the person who originally wrote NamedTuple, and he's not shy about reminding people during his talks at conferences. I wonder if his pride or other personal feelings about that particular module could be clouding his judgment over whether or not to optimize it?


It would be more accurate to say that I led the design effort. There were dozens of people who contributed to it. Its early history and list of contributors is here: http://code.activestate.com/recipes/500261-named-tuples/

The main "personal feeling" involved is just a sense of responsibility for this code. All of us involved in the optimization business have to make judgments that strike a balance between speed, code clarity, maintainability, and risk of having bugs.

For the most part, we want to focus our optimization efforts on things that have a high payoff. For example, we're always having to decide how much of the library to write in pure Python and how much to write in C. If all we cared about was speed, the entire standard library would be written in C (in fact, there is a suggestion for Stefan Behnel to Cythonize the whole library).

In the case of the proposed patch, my judgement was that we would be net worse-off with the patch than without (in part because namedtuple costs are typically a tiny fraction of start-up time). Guido disagreed, so I will go forward with the patch. That is the way open source works.

This is less of "pride" issue and more of a difference of opinion about whether a particular patch was a net win.


I think it's more to do with him primarily teaching python (iirc) rather than working on large systems with it.

It's the same strain of rant Zed Shaw had about Py3, i.e., that it's harder to teach than Py2 and therefore bad. Crazy.

EDIT: OK, if not primarily teaching, spending a lot of his time teaching it. That's a big part of his motivation + worth knowing that when assessing his arguments.


I've heard Raymond tell multiple times that he consults for companies too. In any case, he is a very experienced developer and I don't think your criticism is fair.


> It's the same strain of rant Zed Shaw had about Py3, i.e., that it's harder to teach than Py2 and therefore bad. Crazy.

Why is wanting something to be easy to teach crazy?


Programming languages are tools for developers. They are taught in the service of software development. Optimizing for teaching has it backwards.

It can be a factor: but to place it higher than startup time is prioritising the one-off teaching experience of a few over the repeated development experience of many. For a tool for development. Crazy, no?


Ability to teach is going to be directly correlated with ability to maintain as a business. It's harder to hire people when the bar for learning the skill is higher.

The vast majority of software development is not hitting any kind of scaling or performance bottlenecks. So optimizing for simplicity isn't as crazy as you think.


I think python's start up time is going to correlate much more strongly with business success than whether Raymond's 12th slide in his 16th deck requires five minutes more time.


Absolutely not. Python always had a suboptimal performance story relative to other languages, but people use it anyway.

Businesses use Python because it is easy to pick up, not because it's fast


Nope, I've never had a project fail due to poor startup time. When it comes to the vast majority of projects, there is no scale so realistically a 5 second startup time is not even an issue for the business.


ASCII is easy to teach. Problem is ASCII doesn't work for the majority of the world.


> The approach of generating source code and exec()ing it, is a cool demonstration of Python's expressive power

This sentence is very surprising to me. If the best way to implement something is to literally build a string of source code and pass it to the interpreter, to me it means the user is unable to really express what they want in the language.

As a (terrible) example, imagine a language whose number addition operator only works on literals.

    1 + 2
    => 3

    a = 1
    b = 2
    a + b
    => ERROR!
It does, however, have a string concatenation operator that also works on variables and a way to read user-provided numbers into a variable, as well as some kind of eval function. What would you do to add two user-provided numbers? Something like the following:

    foo = read number!
    bar = read number!
    baz = concatenate foo, ' + ', bar
    quux = eval baz
The same thing could be stated about this language...

> The approach of generating source code and eval-ing it, is a cool demonstration of <this language>'s expressive power.

...but it clearly wouldn't make sense. Functions like exec and eval are an escape hatch for when there is no sensible way of expressing something in the language.


I’d say that the existence of an escape hatch for things that can’t be expressed natively is itself a form of expressivity. It’s one that’s near ubiquitous among dynamically typed languages, so it doesn’t say much that Python has it, but it still provides a favorable comparison to some static languages.

That said, this particular use case for eval is basically macros lite: the code being evaluated doesn’t generally depend on input to the resulting program, so it could have been expressed as a transformation to the module source code, i.e. a macro. And a true macro would be able to provide more natural syntax, whereas namedtuple requires stuffing the field names into a string (or using a wrapper based on some metaclass thing, but the syntax for that isn’t great either). Thus, languages that have macro support, both dynamically typed (Lisp) and statically (e.g. Rust), could be called more expressive in this respect.


No one is claiming exec-ing generated source code is the best way to implement something (namedtuple in this case). The correct way is to use a metaclass, but as I noted in another comment, exec-ing source code provides a significant speed-up over using metaclasses. I imagine that was the rationale behind this particular namedtuple implementation.


Ah, slow startup on unit tests is a big headache. This is especially apparent with big frameworks like Django, where you might as well leave your desk and grab coffee when the tests start, even for small projects. That said, I can't help but think that this is just an inevitable tradeoff of Python. You want great expressive power? You have to sacrifice speed. Awesome constructs like namedtuple make this worth it imo.


the Django tests are slow to start up because they set up a whole database, namedtuple optimisations won't help that


I can't help but think if a one-time initialisation cost is bogging a process down, there are bigger problems than that initialisation cost.


There are processes that you might need to run frequently. I gave an example above: showing the output of a Python script on your shell prompt (e.g. mercurial status).


I moved from Python to Go for rendering my shell prompt for much the same reason: https://blog.bethselamin.de/posts/latency-matters.html


Funnily I did exactly the same -- rewrote parts of a powerline (shell prompt status enhancements) implementation that I used and was written in Python, and turned it into Go.

But it still has to call into Python to get my mercurial status (we use mercurial at work so I need to have that).


If you're running hundreds of very small tests, that one time initialization could take up a large portion of time.


So...maybe...consolidate the process of executing hundreds of little things?

How tiny do the steps in this thought process need to be? We can go on all night.


It's cleanest if each test runs in a new execution context, so that tests can't affect one another.


That's a naive solution to a tiny problem.


I agree with optimizing Python's startup time, and I agree that namedtuple is weird and should be changed.

But I doubt that the namedtuple change will noticeably decrease startup time for most applications (after having experience with this problem for 10+ years).

From a comment over 3 years ago:

[Python interpreter startup does] random access I/O (the slowest thing your computer can do) proportional to (large constant factor) x (num imports in program) x (length of PYTHONPATH).

https://news.ycombinator.com/item?id=7842873

I don't think that instantiating a Python parser 100 times for exec() is going to compare to that. I guess the difference is that I'm thinking about the cold cache case -- namedtuple might be noticeable in the warm cache case.

And there are many many command line tools that start slow because they're written in Python, not just Mercurial.

Mercurial is actually one of the best because they care about it (demandimport) and they don't have too many dependencies. IIRC some of the Google Cloud tools take 500-1000+ ms to start because they are crawling tons of dependencies with long PYTHONPATH.
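The effect of a long import path is easy to reproduce; this is a rough demonstration (timings are machine-dependent, and `padded_mod` is a hypothetical module created just for the test) where a module living at the end of a long sys.path forces the finders to scan every entry before it:

```python
import importlib
import os
import sys
import tempfile
import time

# Build 200 empty directories plus one holding the module we want.
base = tempfile.mkdtemp()
dirs = [os.path.join(base, f"d{i}") for i in range(200)]
for d in dirs:
    os.mkdir(d)
with open(os.path.join(dirs[-1], "padded_mod.py"), "w") as f:
    f.write("VALUE = 42\n")

sys.path.extend(dirs)                   # the module is found last
t0 = time.perf_counter()
padded_mod = importlib.import_module("padded_mod")
elapsed_ms = (time.perf_counter() - t0) * 1000
print(f"scanned {len(dirs)} path entries in {elapsed_ms:.2f} ms")
```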


If you are really interested in improving the startup cost then the first thing is to put everything, including the Python standard library, into .zip files, so it can be zip imported. That removes a lot of the directory/stat overhead. The comment you pointed to did not do that optimization.

For example, I supported a 10 year old set of CGI scripts running on a machine with Lustre filesystem where every filesystem metadata lookup was painfully slow. (See ehiggs' comment at lwn.) I spent time to trim away every import to the bare minimum.

Once you do that, other factors become more significant. (Remove your biggest problem and something else becomes your biggest problem.)

If the view is that a few milliseconds here and a few milliseconds there isn't a problem, well, those milliseconds add up.

'import scipy' (which depends on numpy) takes 3x longer to import than Python itself does. I have replaced dependencies on scipy when the overhead of importing the one function I needed took longer than my program took to run.

There's an old story along these lines, recounted at https://www.folklore.org/StoryView.py?story=Saving_Lives.txt :

> One of the things that bothered Steve Jobs the most was the time that it took to boot when the Mac was first powered on. ... One afternoon, Steve came up with an original way to motivate us to make it faster. ...

> "You know, I've been thinking about it. How many people are going to be using the Macintosh? A million? No, more than that. In a few years, I bet five million people will be booting up their Macintoshes at least once a day."

> "Well, let's say you can shave 10 seconds off of the boot time. Multiply that by five million users and thats 50 million seconds, every single day. Over a year, that's probably dozens of lifetimes. So if you make it boot ten seconds faster, you've saved a dozen lives. That's really worth it, don't you think?"

How many people use Python? How many times is it started per day?


The other way around this is containers or caching filesystems.


Does this mean the import mechanism caches loaded modules, but not where to find things that haven't been imported yet? I would expect it to crawl the dirs on the PYTHONPATH once and then get data from a cache. I know you can extend the PYTHONPATH dynamically, but you could intercept that in dunder methods to update said cache.

Also: for the same interpreter, the same "*.pyc" files, and the same PYTHONPATH, we could dump a cache file and share it across interpreter starts if no dirs on the PYTHONPATH have been touched.

If it's not done, then it might be interesting to bring that up on the python-ideas mailing list.


I'm not sure exactly what you mean, but I think one problem is that the dirs in PYTHONPATH are only the roots. You have to consider changes to all the subdirectories of entries in PYTHONPATH too.

The import mechanism is quite complicated, so while I think you could do some more aggressive .pyc-like caching, it would also be pretty hard to get right.

Personally what I do is avoid Python packaging tools like setuptools, pip, virtualenv, because they tend to create long PYTHONPATH (they also are hacks piled on top of hacks IMO.)

I just write a shell script that downloads dependencies and installs them with distutils / setup.py, and then I wrap the main program in a shell script that sets PYTHONPATH. Yes I know this is unorthodox but it works well for me :)


I don't mean that. I mean that you could set up a watcher on the PYTHONPATH dirs. Then crawl all the PYTHONPATH dirs for available resources, list them, and dump that to a cache file. Then, unless a watcher tells you something changed, you pull data from the cache and don't have to crawl the whole thing every time you load an entry point or pkg_resource.


Just don't do this for things you intend to make public - it makes installing and packaging it really difficult


At my company we changed the interaction between two daemons by calling a command line script instead. It caused a 300ms delay in every single AJAX call and broke dozens of end-to-end tests.

I just tried with Python 2.7 and 3.6, and the hello-world startup time is 100ms, which is an awful lot.


I suspect one factor for why Go is so popular within Google is that (due to complex path hacks as mentioned here as well as a network file system) Python startup is so slow.

On an app I worked on the unit tests took >10 seconds to boot on each run. Among the languages available to Google developers, Go ends up being the lightest weight, despite there being plenty of other lighter languages available outside. (This is not the place to start an argument about language plurality.)


I feel like the namedtuple issue could be solved cleanly by adding a feature I've wanted for a long time in Python: some simple way to cache high-level data and code in generated .pyc files. Today, there's no way to do that; Python compiles the code but evaluates nothing until execution time. I'd like my .pyc files to contain some precomputed expressions and code, to reduce startup time. I probably ought to discuss this on python-ideas.


If the namedtuple is used so heavily, why not make it a part of built-in functions?


It's funny. Go on #python on freenode and ask a question about named tuple. Consensus there is that they should be avoided in favor of real classes or attrs decorated classes.


It's unfortunate that the new version still creates __new__ using exec(). That doesn't seem necessary at all. Instead of generating the method as a string with the argument names filled in, why not use a combination of *args and **kwargs?
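A sketch of that suggestion (not CPython's actual implementation); the usual counter-argument is that the exec()'d version gets real named parameters in its signature and is faster to call:

```python
# Build a generic __new__ that binds *args/**kwargs to a field list,
# instead of exec()-ing generated source with the names baked in.
def make_new(fields):
    def __new__(cls, *args, **kwargs):
        if len(args) + len(kwargs) != len(fields):
            raise TypeError(f"expected {len(fields)} arguments")
        values = list(args)
        for name in fields[len(args):]:
            if name not in kwargs:
                raise TypeError(f"missing argument: {name!r}")
            values.append(kwargs.pop(name))
        if kwargs:
            raise TypeError(f"unexpected arguments: {sorted(kwargs)}")
        return tuple.__new__(cls, values)
    return __new__

Point = type('Point', (tuple,), {'__new__': make_new(('x', 'y'))})
p = Point(1, y=2)
```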


IMHO this namedtuple issue is a non-issue compared to buildout.

buildout tends to append millions of paths to sys.path.

Annnnd your Python project's start time requires 1 minute or more because each import requires scanning millions of directories.


My question is, is it worse or better with Ruby, NodeJS, Perl, Java, or Go? I remember NodeJS scripts being extremely slow to start in general, way slower than Python.


Sorted from slowest to fastest: (all single measurements on my Arch Linux notebook)

  $ time java &> /dev/null
  java &> /dev/null  0,09s user 0,03s system 120% cpu 0,102 total
  $ time ruby -e 42
  ruby -e 42  0,04s user 0,01s system 99% cpu 0,053 total
  $ time python -c 42
  python -c 42  0,03s user 0,00s system 98% cpu 0,030 total
  $ time perl -e 42  
  perl -e 42  0,00s user 0,00s system 92% cpu 0,005 total
I don't have NodeJS at hand to test with. Note that Java produces a help message (which I sent to /dev/null), but that (hopefully) shouldn't affect the timing that much.

Go produces compiled binaries and, as such, exhibits no such startup delay. Startup times for Go programs that I tested are in the same ballpark as Perl above, i.e. not reasonably measurable with a millisecond-precision tool like time(1).


heh, when I read it, I thought it was about reducing Java startup time to be on par with Python's.


    $ time python -c ''
    real	0m0.071s
    user	0m0.064s
    sys	0m0.000s

    $ time python -c 'import functools'
    real	0m0.073s
    user	0m0.052s
    sys	0m0.016s
I mean...


It's not exactly a reliable benchmark for many reasons...


  $ time python -c 'print(1)'
  1

  real	0m0.067s
  user	0m0.015s
  sys	0m0.025s
Is it slow? That was Python 2.7.10, and for me Python 3.6 is similarly fast.


On my system (macOS) I see a minimum of 0m0.034 for Python 3.6.1 and a minimum of 0m0.022 for Python 2.7.13 -- so Python 3.6 is 50% slower.

This is a somewhat contrived example, however. As the article notes, it's worse when you have a lot of imports (and I don't know how that'd differ between versions 2 and 3). I remember talking to some folks at Google who work on the Google Cloud CLI, and they said that Python startup time is a constant problem for their (large) codebase. Especially when it's used for tab completion, because it might run every time you press tab, and the user wants instant results.


We're dealing with this on a CLI build tools project I hack on (catkin_tools). It uses plugin discovery based on entry_points, and short of caching plugins in a homedir file, it's basically impossible to make it fast enough to use with something like argcomplete.

Relevant ticket: https://github.com/catkin/catkin_tools/issues/297


Why on earth would they think invoking a new interpreter each time someone hits "tab" would do anything except perform suboptimally?


Aha, tab completion makes sense. Thanks!


If you had read the article, you would see that the topic of discussion is reducing startup time when many modules are loaded due to the slow nature of `namedtuple`.


I spent a lot of time analyzing the startup time of Java, Python, and Node.js. For any real-world program, startup time is entirely dominated by time taken to load code into memory, which in python is import statements. Also, as slow as Python 2.7 is, Node.js, at least when I measured it, was twice as slow.


Here's some benchmarks i've come across - http://onetom.rebol.info/2013/08/18/startup-time-of-interpre...


The discussion is about import speed, which is a real problem:

  $ time python3 -c 'import tensorflow'

  real	0m4.522s
  user	0m1.712s
  sys	0m0.466s


Maybe it's only slow when importing. But with mainstream pip packages, you often end up importing a lot.



