Something that might be worth pointing out to folks newer to Haskell is this syntax:
location.x `over` (+10) $ player1
Here, `over` is not a string. This is not stringly typed programming. It translates to
over (location.x) (+ 10) $ player1
It's used to take a function that would normally be
function_name argument1 argument2
and allow you to call it like so
argument1 `function_name` argument2
so
plus 1 2 = 1 `plus` 2
There's nothing special you need to do, and it's purely a style thing. Sometimes it's easier to read.
Edit - fix from DanWaterworth, the . in location.x is (if I understand the problem correctly) function composition and therefore location.x is location . x, so my re-ordering possibly changed things more than I thought. The principle is the same, but I've wrapped location.x in parentheses, just to be sure. I'm sure someone more well versed in haskell can correct me on this :)
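A minimal sketch of the backtick equivalence, with `plus` as a made-up function defined just to show the two call forms:

```haskell
-- plus is a hypothetical function, defined here only for illustration.
plus :: Int -> Int -> Int
plus a b = a + b

main :: IO ()
main = do
  print (plus 1 2)    -- prefix form
  print (1 `plus` 2)  -- infix form; exactly the same function
  print (10 `div` 3)  -- the backticks work on standard functions too
```

All three lines print `3`; the backticks change nothing but the call site's appearance.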
Yeah that's a slight niggle with Lens using (.). From an "object oriented" point of view it looks like (.) should have higher precedence than the function application, but in Haskell it doesn't.
That's why I really dislike the tendency for people to use "foo.bar" with Lens instead of "foo . bar". I find the lack of spaces makes it really easy for my brain to slip into OO mode and start forgetting precedence.
It seems like there should be a way to define player1 as its own (root) lens so that one is simply writing player1.location.x %~ (+10), or so. Are we simply not doing this because it makes reading the data structure rather difficult?
That can kind of be done using `Context`s, but those are really just implementation details and should rarely be used.
Broadly, you wouldn't want to do that because it implies mutation. There's not much value in `player1.location.x %~ (+10)` because it can't modify player1 in place; it can only return a modified copy (a new player1').
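A sketch of that point with plain records (the `Player`/`Position` names are illustrative): the "update" is just a function from the old value to a new one, and the original is untouched.

```haskell
data Position = Position { x :: Int, y :: Int } deriving (Show, Eq)
data Player   = Player   { location :: Position } deriving (Show, Eq)

-- "Mutating" a nested field really builds a new Player around new parts.
moveRight :: Int -> Player -> Player
moveRight n p = p { location = (location p) { x = x (location p) + n } }

main :: IO ()
main = do
  let player1  = Player (Position 0 0)
      player1' = moveRight 10 player1  -- a new value; player1 is unchanged
  print (x (location player1))   -- still 0
  print (x (location player1'))  -- 10
```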
Of course, stateful mutation is just a monadic effect, so you can wrap it up into a State monad, which Lens already has shortcuts for
do
  location.x += 10
  location.y += 10
which has a type like `MonadState ThingWithLocation m => m ()`, and we can then apply it to whatever `player` we want as a pure function.
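The same shape can be hand-rolled without the lens package, as a sketch of what `+=` is doing under the hood. `xPlus`/`yPlus` here are made-up stand-ins for lens's stateful operators, written directly against the field names, using `State` from the transformers library that ships with GHC:

```haskell
import Control.Monad.Trans.State (State, execState, modify)

data Position = Position { x :: Int, y :: Int } deriving (Show, Eq)
data Player   = Player   { location :: Position } deriving (Show, Eq)

-- Crude stand-ins for lens's (+=): bump one nested field of the state.
xPlus, yPlus :: Int -> State Player ()
xPlus n = modify (\p -> p { location = (location p) { x = x (location p) + n } })
yPlus n = modify (\p -> p { location = (location p) { y = y (location p) + n } })

move :: State Player ()
move = do
  xPlus 10
  yPlus 10

-- execState runs the "mutation" as a pure Player -> Player function.
main :: IO ()
main = print (execState move (Player (Position 0 0)))
```

Lens generates the field-plumbing in `xPlus`/`yPlus` for you; the pure `State` machinery underneath is the same.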
I've used the same idea behind lenses and bijective functions to fairly good effect as a basis for declarative data binding in a UI framework (actually, combined AJAX and web server framework).
When your data mappings can be expressed this way, it's easy to get a lot done with very little work; things like RoR+JS-du-jour look primitive in comparison.
A potential downside is that you have to warp your thinking a little bit in order to make them fit the paradigm, and some instinctive imperative approaches don't work as well.
No specific link. My primary interest in the topic is very applied; primarily in bidirectional data binding expressions mapping data and business objects to UI controls, possibly over a transport.
A lens is just a pair of (setter, getter). It's a first-class reference that you can pass around, much like you'd pass around a pointer to a location; except you can transform it. If you use bijective functions to do the transformation, you can wrap both the getter and setter, one with the function and the other with its inverse. These lenses are a little like pipes, wormholes or ducts that are attached at one end to your model / data layer / business layer / wherever.
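That description translates almost directly into code. A sketch, with all names (`Lens`, `view`, `put`, `viaBijection`, `Pos`) made up for illustration rather than taken from any library:

```haskell
-- A lens as a literal (getter, setter) pair, as described above.
data Lens s a = Lens { view :: s -> a, put :: a -> s -> s }

-- Composing two lenses yields a first-class reference into deeper structure.
o :: Lens s a -> Lens a b -> Lens s b
o outer inner = Lens
  { view = view inner . view outer
  , put  = \b s -> put outer (put inner b (view outer s)) s
  }

-- Wrapping with a bijection: read through one direction, write through the inverse.
viaBijection :: (a -> b) -> (b -> a) -> Lens s a -> Lens s b
viaBijection to from l = Lens { view = to . view l, put = put l . from }

data Pos = Pos { px, py :: Int } deriving (Show, Eq)

xL :: Lens Pos Int
xL = Lens px (\v p -> p { px = v })

main :: IO ()
main = do
  print (view xL (Pos 1 2))                    -- 1
  print (put xL 9 (Pos 1 2))                   -- Pos {px = 9, py = 2}
  let cm = viaBijection (* 100) (`div` 100) xL -- expose px in "centimetres"
  print (view cm (Pos 2 0))                    -- 200
```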
I simply plugged them into the UI, and added some extra control information for reactive UI updates at the web application server request / response boundary. The upshot was that the user could poke at the UI, a request would be sent down effectively addressing that particular lens endpoint, business logic as necessary would run, and then the minimal UI update would come back in the response - only lenses whose values had changed would be sent back. The business logic never had to specifically tell the UI which bits needed to be refreshed; the bits that needed to be sent were inferred automatically.
I wrote a data-binding language specifically to implement this scheme. It was called Gravity, in part because it accreted more and more responsibility in the system :)
The general idea is that a tree of associative data structures (which include integer-indexed vectors) can have a path to a location in the tree reified as normal data. Then, you can run any ordinary function on a deep position in the tree without any of the applicative functor machinery.
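A small sketch of that idea, assuming a tree of string-keyed maps with `Int` leaves (the shape and names are illustrative):

```haskell
import qualified Data.Map as M

-- A tree of associative structures with Int leaves.
data Tree = Leaf Int | Node (M.Map String Tree) deriving (Show, Eq)

-- The path to a location, reified as ordinary data.
type Path = [String]

-- Run an ordinary function at a deep position, rebuilding only the spine.
adjustAt :: Path -> (Int -> Int) -> Tree -> Tree
adjustAt []     f (Leaf n) = Leaf (f n)
adjustAt (k:ks) f (Node m) = Node (M.adjust (adjustAt ks f) k m)
adjustAt _      _ t        = t  -- path and shape disagree: leave the tree alone

main :: IO ()
main = print (adjustAt ["player1", "score"] (+ 10) world)
  where
    world = Node (M.fromList
              [("player1", Node (M.fromList [("score", Leaf 5)]))])
```

Because the path is plain data, it can be stored, sent over a wire, or computed at runtime, which is exactly what the UI-binding scheme described above relies on.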
Although I really like the fact that lenses (from the `lens` package) can be composed using regular function composition, instead of the more generalized category composition like `fclabels` uses, I still think it's weird that the direction of composition is flipped.
It reads more like OO, but a lot less like idiomatic Haskell.
I also think it's a bit weird to flip direction of composition, but there are a couple of advantages --
1. It keeps the "imperative" feel, so that you can write
person.address.zipCode
rather than
zipCode.address.person
2. There's an argument that mathematical functions are "back to front" and that it would be more natural to write functions left to right, i.e. (x)f rather than f(x), so that the function composition f.g is "apply f then apply g" rather than "apply g then apply f" which is where we are at the moment.
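For concreteness: base already ships a left-to-right composition operator, `(>>>)` from Control.Arrow, so both reading orders can be compared side by side (`addOne` and `double` are illustrative):

```haskell
import Control.Arrow ((>>>))

addOne, double :: Int -> Int
addOne = (+ 1)
double = (* 2)

main :: IO ()
main = do
  print ((double . addOne) 5)    -- right to left: addOne runs first, then double
  print ((addOne >>> double) 5)  -- left to right: the same pipeline, read in order
```

Both print `12`; the only difference is which way your eye travels to follow the data flow.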
> There's an argument that mathematical functions are "back to front" and that it would be more natural to write functions left to right
Totally agreed, but until programming languages adopt "x f = ..." for a definition of f then I'd prefer to stick to composing the traditional way round.
It's a bit dated now, to be sure, as there have since been many developments in the field of "bidirectional programming". See, for example, Daniel Wagner's paper on "Edit Lenses":
I still don't get how people can claim that functional programming is intuitive when mutating a nested struct requires a separate package and metaprogramming hacks.
| mutating a nested struct requires a separate package and metaprogramming hacks.
But it doesn't. And you would have known this if only you had read the fucking article.
data Position = Position { x :: Int, y :: Int }
data Player = Player { pos :: Position }
moveRight :: Int -> Player -> Player
moveRight n (Player (Position x y)) = Player (Position (x + n) y)
Nah. Not if you use an appropriate form to extract the desired member while leaving the rest alone.
data Person = Person { name   :: String
                     , age    :: Integer
                     , height :: Integer
                     , weight :: Integer
                     }
  deriving (Show)
makeOlder p@(Person { age = current }) increase = p { age = current + increase }
main = let b = Person { name = "Bob", age = 50, height = 72, weight = 190 }
           c = b { age = (age b) + 10 }
           d = makeOlder b 10
       in do
            putStrLn (show b)
            putStrLn (show c)
            putStrLn (show d)
-- ~/test/E$ ./M
-- Person {name = "Bob", age = 50, height = 72, weight = 190}
-- Person {name = "Bob", age = 60, height = 72, weight = 190}
-- Person {name = "Bob", age = 60, height = 72, weight = 190}
Because player and position are records, the function could actually be written:
moveRight :: Int -> Player -> Player
moveRight n p@(Player {pos = curPos@(Position {x = curX})}) =
  p {pos = curPos {x = curX + n}}
In which case it isn't making any assumptions about extra properties. The record pattern-matching syntax isn't the prettiest, though, and I find myself having to look it up every time to remember where the accessor name (pos), parameter name (curPos), and intended pattern (Position...) go relative to the "=" and "@".
In the most naive way of doing such things, there is indeed such a tradeoff. So let's say that my Player is now allowed to collect Rice, which he will use in certain eat() functions.
If Player is a data structure, I could add the rice counter to the Player and then modify all of my functions to "pass through" the rice. If the game is small enough, this makes the most sense. This is not as bad as it sounds because the typechecker will get angry about any Player functions in the code which do not have rice. So I've got the language on my side when I want to "make sure I've got all of them," it's not just a haphazard grep for the token "Player".
I could also write everything dogmatically with a pattern matching syntax which passes through all of the rest of the data. This is embodied in the examples below so I won't re-write it.
On the other hand, I can treat the game as a module, create a new data structure `PlayerWithRice {player :: Player, rice :: Int}`. Then I can define `addRice :: (Player -> Player) -> PlayerWithRice -> PlayerWithRice` which takes a function for modifying players without rice, and turns it into a function for modifying players with. Then there's just a list of new declarations in a new module which add rice to existing code.
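A sketch of that wrapper, with illustrative field names (`name`, `score`) standing in for whatever the real Player carries:

```haskell
data Player = Player { name :: String, score :: Int } deriving (Show, Eq)

data PlayerWithRice = PlayerWithRice { player :: Player, rice :: Int }
  deriving (Show, Eq)

-- Lift a rice-oblivious update into one that passes the rice through untouched.
addRice :: (Player -> Player) -> PlayerWithRice -> PlayerWithRice
addRice f pw = pw { player = f (player pw) }

-- Existing code, written before rice was a thing.
bumpScore :: Player -> Player
bumpScore p = p { score = score p + 1 }

main :: IO ()
main = print (addRice bumpScore (PlayerWithRice (Player "Bob" 1) 30))
```

All the old `Player -> Player` functions keep working; `addRice` is the only new plumbing.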
This all sounds complicated! Surely there's something simpler? There are a few simpler ideas: one is called a Monad Transformer. The Player can be stored in a slightly more complicated way at first, so that it "wraps" another object which holds "all future data-to-be-added". All of your existing code written in this context will just automatically pass this 'future' data through. For PlayerWithRice you simply fill in the 'future' data as a data structure which holds rice and also more future data.
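This isn't literally a monad transformer, but the pass-through shape can be sketched with a type parameter holding the "future" data (all names here are made up for illustration):

```haskell
-- Player wraps a slot for all future data-to-be-added.
data Player ext = Player { pos :: Int, future :: ext } deriving (Show, Eq)

-- Existing code, written against any extension, passes it through untouched.
moveRight :: Int -> Player ext -> Player ext
moveRight n p = p { pos = pos p + n }

-- Later, fill the slot with a rice counter plus room for yet more data.
type WithRice more = Player (Int, more)

addRice :: Int -> WithRice more -> WithRice more
addRice n p = p { future = (fst (future p) + n, snd (future p)) }

main :: IO ()
main = print (addRice 5 (moveRight 3 (Player 0 (0, ()))) :: WithRice ())
```

`moveRight` never mentions rice, yet works unchanged on a `WithRice` player; that's the same "existing code automatically passes the extra context through" property a transformer stack gives you.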
Second, there are lenses, which the above article discusses. If you write every set/get with lenses, then updating the data structure only requires updating those lenses, and all the rest of the code will follow. Again, you'll have a type-checker on your side to guarantee that you didn't miss anything crazy when you update those lenses.
Lenses are a way to try to minimize that effect, because it does "kind of" happen. This is one end of what Wadler termed the expression problem: most languages are structured such that either the nouns or the verbs are complex, since if both are complex you get n^2 interactions.
Haskell chooses to make functions complex to enable simpler data types. This means there's a lot of drive to have useful, abstract interfaces, but it also means that you have to protect against the kind of complexity you get when expanding your objects.
Lenses help to do this by abstracting "views" on objects so that you only have to update the lens... and that can be done automatically.
Why do you need to mutate things in order to write a program? Operational programming may have damaged your brain.
(With regards to lenses, note that: This isn't mutating anything, and it's not a metaprogramming "hack" either. Functions as first-class entities instead of opaque blobs of code is one of the big wins of functional programming.)
It's a trade-off. You can either use direct mutation through some sort of impure reference type, or pure updates through first-class accessor labels.
Impure references are probably easier to use, but are harder to reason about and allow for more subtle bugs in your code. Accessor labels require a bit more trickery, but give a more principled approach and result in code that is less error-prone and easier to compose.
Haskell doesn't force you in either direction. Pick the approach that best fits your domain and preferences. Personally I'm convinced pure code is a way better default, however for certain domains I certainly resort to using `IORef`s, `TVar`s, etc.
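The two styles side by side, as a minimal sketch (`bump` is an arbitrary update function):

```haskell
import Data.IORef (modifyIORef, newIORef, readIORef)

bump :: Int -> Int
bump = (+ 10)

main :: IO ()
main = do
  -- Impure: a first-class mutable cell, updated in place inside IO.
  ref <- newIORef (0 :: Int)
  modifyIORef ref bump
  v <- readIORef ref
  print v                -- 10
  -- Pure: the "update" is just a function producing a new value.
  let old = 0 :: Int
  print (old, bump old)  -- the original binding is still 0
```

The `IORef` version is shorter to reach for, but every reader of `ref` now has to reason about who else might write it; the pure version keeps both the before and after values in hand.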
I think it's a great convention for Haskell to use. Learn You a Haskell uses it to wonderful effect. Haskell is hard to learn (at least at first), even for people with a functional programming background. The cartoons make it look like a children's book, which has the psychological effect of making the material seem easier.
I like FP as much as the next guy (who dabbles in it out of curiosity). But I can't get over thinking that the need to invoke function composition for operations which are trivial in other languages suggests that somewhere we ended up trying to use the wrong tool for the job.
This isn't the first time I got myself wondering "where do people derive these ideas from?" I need to read and re-read (and perhaps read once again) a post full of pictures every time someone tries to explain a new concept in Haskell to me. And the more it happens, the more I'm convinced that this language is not an everyday tool.
I don't want people to misinterpret me, I decided to learn Haskell and I don't regret it. But I often feel it's more like trying to cut vegetables with a katana: it's challenging, it's fun to practice, and surely it's a powerful tool, but even if I become a master at it I won't accomplish much alone, and I definitely won't want people trying this next to me.
The thought of a bad programmer trying to look smart while fiddling in my Haskell code scares me to no end.