I think the article is trying to equate quantum computers with non-deterministic Turing Machines - which still looks like a matter for debate.
As for P & NP - aren't they defined in terms of decision problems that can be solved in polynomial time by deterministic and non-deterministic TMs respectively? Which is a distinction that is about as close to "the hardware used to perform computations" as it is possible to get while working with abstractions.
You're right. P and NP are formally defined in terms of Turing machines, but the Church-Turing thesis (or rather its extended, complexity-theoretic version) says that all models of general computation are equivalent up to polynomial overhead. If quantum computers can solve any (which means all) NP-complete problems in polynomial time, then they're something entirely separate from all the models of general computation we currently know of. I can't even fathom what that would mean. Wikipedia mentions some "non-Turing models" of computing, but most seem to be either jokes or highly theoretical: http://en.wikipedia.org/wiki/Zeno_machine http://en.wikipedia.org/wiki/Hypercomputation
When talking about complexity you usually have to drop the "Turing machine as a metaphor for everything" idea very fast. In complexity theory you're interested in how fast different variations of computing machines (which might not be Turing-complete, or might be more than Turing-complete) can solve certain problems. Each model of computing you assume gives you a different class: for example, AC0 is the model you get if your computing machine is restricted to be a constant-depth circuit with a polynomial number of AND and OR gates; the L^HALTING model is what you get if you augment a finite-state automaton with an oracle that can tell it whether or not a given program halts. P is the set of problems that you can solve with a traditional Turing machine in a polynomial number of steps; it is also the set of problems that you can specify with first-order-logic formulas plus a least-fixed-point operator. NP is the set of problems solvable by a nondeterministic Turing machine in polynomial time, but it is also the set of problems for which a deterministic Turing machine can verify a given solution in polynomial time, and the class of problems for which there exists a probabilistic proof checker that can verify a solution with as high a probability as you want by checking only a constant number of bits from a properly-formatted proof. I could go on.
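The verify-vs-solve distinction is easy to see concretely. Here's a toy sketch using subset-sum (an NP-complete problem): checking a proposed solution takes polynomial time, while the only solver sketched here is brute force. The function names are just illustrative, not from any library:

```python
from itertools import combinations

def verify_subset_sum(nums, target, certificate):
    """Polynomial-time check: is `certificate` a sub-multiset of
    `nums` that sums to `target`? This is the 'verify a solution
    in polynomial time' side of NP."""
    pool = list(nums)
    for x in certificate:
        if x not in pool:
            return False
        pool.remove(x)
    return sum(certificate) == target

def solve_subset_sum(nums, target):
    """Brute-force search over all 2^n subsets: exponential time.
    No polynomial-time algorithm for this is known."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None
```

The asymmetry is the whole point: a nondeterministic machine can "guess" the certificate and only needs the cheap verifier, while a deterministic machine apparently has to search.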
But what I'm getting at is that quantum computing research today is basically trying to answer the question of where the BQP (bounded-error quantum polynomial time) complexity class sits in the complexity zoo. If it is found equivalent to NP (that is, if a quantum computer can solve any NP problem with as high a probability as you want in polynomial time), then that's what it means. Nothing known so far makes this a priori impossible. What we do have is an exponential speedup for some problems (i.e., some problems not known to be in BPP, the classical bounded-error probabilistic analogue of BQP, are known to be in BQP), but no NP-complete problem is known to be in BQP, and it does not seem likely that a quantum computer can solve an NP-complete problem in polynomial time. On the other hand, if you could prove that BQP is strictly stronger than BPP but strictly weaker than NP, you would have just proven that P is not NP.
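The "bounded-error" in both BPP and BQP means the algorithm may answer wrongly with probability up to 1/3, but you can push the error as low as you like by repeating and taking a majority vote (the failure probability drops exponentially by a Chernoff bound). A toy sketch, where `noisy_decide` is a hypothetical stand-in for any bounded-error algorithm:

```python
import random

def noisy_decide(true_answer, error=1/3):
    """Stand-in for a bounded-error decision procedure: returns the
    correct answer with probability 1 - error, the wrong one otherwise."""
    return true_answer if random.random() > error else not true_answer

def amplify(true_answer, trials=101):
    """Run the noisy procedure `trials` times and take a majority vote.
    The chance the majority is wrong shrinks exponentially in `trials`."""
    yes_votes = sum(noisy_decide(true_answer) for _ in range(trials))
    return yes_votes > trials / 2
```

This is why the "as high a probability as you want" phrasing above is harmless: amplification costs only a polynomial (in fact constant-per-extra-bit) overhead.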