Published 2023-08-01 (author unknown)
Functional Dynamic Programming

Dynamic programming is a very common family of algorithms. In C/C++/Java one usually memoizes with arrays, but functional languages generally cannot conveniently manipulate such side-effect-dependent data structures, so functional memoization has to find other routes. This post is a simple set of notes, showing with code snippets the techniques I know for functional dynamic programming. (Update 2020/5/17, five months later: added Memocombinators.)

Course-of-Values Recursion

Course-of-values recursion is, to me, the most intuitive technique: keep the results already computed as part of the return value, so that recursive calls can read values that were computed before. For a recursive function f, define an auxiliary function bar that returns the whole course of values,

    bar n = [f n, f (n-1), ..., f 0]

so the original f can be expressed through bar as f = head . bar.

Fibonacci:

```haskell
fibBar :: Int -> [Int]
fibBar 0 = [0]
fibBar 1 = 1 : fibBar 0
fibBar n =
  let course = fibBar (n-1) in  -- [fib(n-1) .. fib(0)]
  let p  = course !! 0 in       -- fib(n-1)
  let pp = course !! 1 in       -- fib(n-2)
  (p + pp) : course

-- >>> fibBar 10
-- [55,34,21,13,8,5,3,2,1,1,0]
```

Binary Partitions: in how many ways can a number be written as a sum of powers of two? The transition is

    bp(0) = 1
    bp(n) = bp(n-1)             (n odd)
    bp(n) = bp(n-1) + bp(n/2)   (n even)

so:

```haskell
bpBar :: Int -> [Int]
bpBar n
  | n == 0 = [1]
  | even n =
      let course = bpBar (n-1) in
      let prev = course !! 0 in                  -- bp (n-1)
      let half = course !! (n-1 - n `div` 2) in  -- bp (n/2)
      (prev + half) : course
  | otherwise =
      let course = bpBar (n-1) in
      head course : course

-- >>> bpBar 20
-- [60,46,46,36,36,26,26,20,20,14,14,10,10,6,6,4,4,2,2,1,1]
```

Unfortunately the complexity is not O(n), because every step indexes into a linked list. That is bad.

0-1 knapsack: the transition is

    knapsack(0, y) = 0
    knapsack(i, y) = knapsack(i-1, y)                                    (w_i > y)
    knapsack(i, y) = max (knapsack(i-1, y)) (knapsack(i-1, y-w_i) + v_i) (otherwise)

Here the whole n×W state space has to be packed into the course, one "row" per remaining item list, with the capacity running from y down to 0:

```haskell
type Weight = Int
type Value  = Double
type Items  = [(Weight, Value)]

knapsack :: Items -> Weight -> Value
knapsack items capacity = head $ bar items capacity
  where
    bar :: Items -> Weight -> [Value]
    bar [] 0        = [0]
    bar (_:items) 0 = 0 : bar items capacity
    bar [] y        = 0 : bar [] (y-1)
    bar ((w, v):items) y
      | w <= y =
          let course = bar ((w, v):items) (y-1) in
          let v1 = course !! capacity in        -- knapsack(i-1, y)
          let v2 = course !! (capacity + w) in  -- knapsack(i-1, y-w)
          let new = max v1 (v2 + v) in
          new : course
      | otherwise =
          let course = bar ((w, v):items) (y-1) in
          course !! capacity : course

-- >>> knapsack [(2, 6.0), (2, 3.0), (6, 5.0), (5, 4.0), (4, 6.0)] 10
-- 15.0
```

Besides the constant list indexing, CoV has another limitation: not every recursive function can be converted into this form — Ackermann's function, for example.

Streaming

This is an elegant method specific to Haskell. Fibonacci is the classic example:

```haskell
fibs :: [Integer]
fibs = 1 : 1 : zipWith (+) fibs (tail fibs)

-- >>> take 10 fibs
-- [1,1,2,3,5,8,13,21,34,55]
```

Think of fibs as a stream: its first two elements are 1, and the rest is formed (zipWith (+)) from the stream itself (fibs) and the stream minus its head (tail fibs). An element is only computed when you pull "the next one", and it is never recomputed — that is the power of Haskell's lazy evaluation.

Similarly, the factorial stream is formed from [1..] and itself:

```haskell
facts :: [Integer]
facts = 1 : zipWith (*) [1..] facts

-- >>> take 10 facts
-- [1,1,2,6,24,120,720,5040,40320,362880]
```

Streams can express more complex problems too, such as the Binary Partitions from before:

```haskell
bps :: [Integer]
bps = 1 : zipWith3 reduce
        [1..]
        bps
        (tail $ dup bps)          -- the stream of bp (n/2)
  where
    dup xs = xs >>= \x -> [x, x]  -- duplicate every element
    reduce n a b
      | even n    = a + b
      | otherwise = a

-- >>> take 21 bps
-- [1,1,2,2,4,4,6,6,10,10,14,14,20,20,26,26,36,36,46,46,60]
```

Note the order is the reverse of the CoV version. Also, there is no indexing overhead here — properly O(n), nice.

The stream form is beautiful, but only rarely can it actually be written beautifully; algorithms themselves are ugly, after all. The 0-1 knapsack, for instance, comes out this ugly no matter how you write it (and it is already a simple algorithm).

Dynamorphism

A dynamorphism is one of the recursion schemes: the composition of an anamorphism and a histomorphism. The anamorphism builds the recursion tree; the histomorphism performs the memoization and the folding. Operationally it can be seen as an abstraction of CoV that automates both the memoization and the recursion, and generalizes the structure of the course (though a list is still the usual choice).

```haskell
-- dynamorphism
dyna :: Functor f => (a -> f a) -> (f (Cofree f c) -> c) -> a -> c
dyna phi psi a = let x :< _ = h a in x
  where h = uncurry (:<) . (\fc -> (psi fc, fc)) . fmap h . phi
-- dyna phi psi = histo psi . ana phi
```

Binary Partitions:

```haskell
bpDyna :: Integer -> Integer
bpDyna = dyna phi psi
  where
    phi 0 = Nil
    phi n = Cons n (n - 1)
    psi Nil = 1
    psi (Cons n course)
      | even n =
          let prev = fromJust $ lookupCourse course 0 in                  -- bp (n-1)
          let half = fromJust $ lookupCourse course (n-1 - n `div` 2) in  -- bp (n/2)
          prev + half
      | otherwise = fromJust $ lookupCourse course 0

lookupCourse :: (Num n, Eq n) => Cofree (ListF e) a -> n -> Maybe a
lookupCourse (x :< _) 0 = Just x
lookupCourse (_ :< hole) n = case hole of
  Nil         -> Nothing
  Cons _ next -> lookupCourse next (n - 1)

-- >>> map bpDyna [0..20]
-- [1,1,2,2,4,4,6,6,10,10,14,14,20,20,26,26,36,36,46,46,60]
```

Longest common subsequence: another classic DP problem whose statement I won't repeat. Its transition is

    lcs([], ys)   = lcs(xs, []) = []
    lcs(x:xs, y:ys) = x : lcs(xs, ys)                              (x == y)
    lcs(x:xs, y:ys) = the longer of lcs(xs, y:ys) and lcs(x:xs, ys) (otherwise)

```haskell
lcsDyna :: Eq a => [a] -> [a] -> [a]
lcsDyna as bs = dyna phi psi (as, bs)
  where
    aslen = length as
    phi ([], [])       = Nil
    phi all@([], y:ys) = Cons all (as, ys)
    phi all@(x:xs, ys) = Cons all (xs, ys)
    psi Nil              = []
    psi (Cons ([], _) _) = []
    psi (Cons (_, []) _) = []
    psi (Cons (x:_, y:_) course)
      | x == y                = x : zs
      | length xs > length ys = xs
      | otherwise             = ys
      where
        xs = fromJust $ lookupCourse course 0            -- lcs(tail a, b)
        ys = fromJust $ lookupCourse course aslen        -- lcs(a, tail b)
        zs = fromJust $ lookupCourse course (aslen + 1)  -- lcs(tail a, tail b)
```

0-1 knapsack:

```haskell
knapsackDyna :: Items -> Weight -> Value
knapsackDyna goods w = dyna phi psi (goods, w)
  where
    phi ([], 0)       = Nil
    phi (wv:goods, 0) = Cons (wv:goods, 0) (goods, w)
    phi (goods, w')   = Cons (goods, w') (goods, w' - 1)
    psi Nil              = 0
    psi (Cons ([], _) _) = 0
    psi (Cons ((w', v'):goods', rest) course)
      | w' > rest = fromJust $ lookupCourse course w  -- knapsack(i-1, rest)
      | otherwise =
          let v1 = fromJust $ lookupCourse course w         -- knapsack(i-1, rest)
              v2 = fromJust $ lookupCourse course (w' + w)  -- knapsack(i-1, rest - w')
          in max v1 (v2 + v')

-- >>> knapsackDyna [(2, 6.0), (2, 3.0), (6, 5.0), (5, 4.0), (4, 6.0)] 10
-- 15.0
```

By the way, the recursion-schemes library here is straight off Hackage — I was too lazy to write my own. I may write a separate post introducing dynamorphisms (no promises). Honestly, I don't feel this reduces the complexity much anyway...

Memocombinators

(Updated 2020/5/17; no point starting a new filler post for it.) This one is extremely concise and pretty, beating the approaches above in almost every respect. But the method itself is quite tricky: it hinges on how Haskell handles partial application.

The most direct way to memoize is to maintain a table that caches results already computed. For instance:

```haskell
-- take a function, return its memoized version
memoize :: (Int -> a) -> (Int -> a)
memoize f n = map f [0..] !! n
```
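Incidentally, the O(n) list indexing that hurt the CoV versions above can be avoided whenever an upper bound on the argument is known, by backing the same cache idea with an immutable array. A minimal sketch of my own (the name `memoArr` and the bound 100 are mine, not from any library):

```haskell
import Data.Array (Array, listArray, (!))

-- Array-backed memoization for Int arguments in [0 .. bound].
-- The array elements are lazy thunks, so each `f i` is computed
-- at most once, on first lookup.
memoArr :: Int -> (Int -> a) -> (Int -> a)
memoArr bound f = (table !)
  where table = listArray (0, bound) (map f [0 .. bound])

-- Fibonacci through the array cache: recursive calls go through
-- fibA, i.e. through the shared table.
fibA :: Int -> Integer
fibA = memoArr 100 go
  where
    go 0 = 0
    go 1 = 1
    go n = fibA (n - 1) + fibA (n - 2)
```

Note that `fibA = memoArr 100 go` is written point-free for exactly the sharing reason discussed below: the table is built once and shared across all calls.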
The cached content is the list map f [0..]. Hoping this memoizes, we might write Fibonacci like so:

```haskell
fibMemo :: Int -> Integer
fibMemo = memoize fib
  where
    fib 0 = 0
    fib 1 = 1
    fib n = fibMemo (n - 1) + fibMemo (n - 2)
    -- the recursive calls must not go to fib itself:
    -- fib was never passed through memoize
```

Sadly, this memoizes nothing at all. The reason: every call of memoize recomputes map f [0..] from scratch. No memoization, and enormous allocation on top.

So what is the correct way to write it? Partial application:

```haskell
memoize :: (Int -> a) -> (Int -> a)
memoize f = (map f [0..] !!)
-- equal to `(!!) (map f [0..])`
-- but not to `\n -> map f [0..] !! n`
```

Magically, it works! The reason is that in Haskell a partial application is not reduced — it merely binds the argument. Inside memoize, the argument map f [0..] is bound to the function (!!), so map f [0..] is created only once (as a single thunk) for the whole run. That list then caches every value of f computed so far, which is exactly memoization. (Note: let cache = map f [0..] in \n -> cache !! n also works.)

But \n -> map f [0..] !! n does not: every call re-evaluates map f [0..]. In Haskell, reduction (the call) happens only on a "full application"; a "partial application" only binds arguments. As a consequence, η-conversion is not fully meaning-preserving in Haskell, and it can make memoization "disappear". (CAFs — constant applicative forms — could also be brought in here; I won't expand on that.)

By the same token, writing fibMemo as

```haskell
fibMemo :: Int -> Integer
fibMemo n = memoize fib n
  where -- ...
```

also loses the memoization. So this is a genuinely tricky method: it only works by exploiting a not-so-uniform "feature" of Haskell.

Also, since indexing a singly linked list is slow, we can back the cache with other data structures — binary search trees, (read-only, in Haskell) arrays, hash tables, and so on (the same applies to CoV above). In general, memoize takes the form

```haskell
memoize :: (k -> v) -> (k -> v)
memoize f = index (fmap f table)
```

where table enumerates all keys, fmap f turns it into a table of values, and index looks a key up (for Int: table is [0..], fmap is map, index is (!!)). This can be abstracted into a typeclass:

```haskell
class Memoizable k where
  memoize :: (k -> v) -> (k -> v)
```

For memoizing over more complex key types, these properties of Map:

    Map () v           ~ v
    Map (Either a b) v ~ (Map a v, Map b v)
    Map (a, b) v       ~ Map a (Map b v)

let us derive the corresponding Memoizable facts:

    Memoizable ()
    (Memoizable a, Memoizable b) => Memoizable (Either a b)
    (Memoizable a, Memoizable b) => Memoizable (a, b)
    -- implement each separately

With these we get some very useful functions:

```haskell
-- memoize both arguments
memoize2 :: (Memoizable a, Memoizable b) => (a -> b -> r) -> (a -> b -> r)
memoize2 = curry . memoize . uncurry

-- memoize only the second argument?
-- memoizeSecond :: Memoizable b => (a -> b -> r) -> (a -> b -> r)
-- memoizeSecond = flip . memoize . flip
-- this does NOT work: it memoizes values of type `a -> r`, not `r`
```

Now the knapsack problem:

```haskell
knapsackMemo :: Weight -> Index -> Value
knapsackMemo = memoize2 knapsack
  where
    knapsack y 0 = 0
    knapsack y i
      | wi > y    = knapsackMemo y (i - 1)
      | otherwise = max (knapsackMemo y (i - 1))
                        (vi + knapsackMemo (y - wi) (i - 1))
      where ~(wi, vi) = items !! i
```

Very concise — it is just the transition relation written down directly. (Note the recursive calls go through knapsackMemo, not knapsack, for the reason discussed above.)

On the whole, though, this pretty-but-tricky method comes with no guarantees: Haskell never promises to merely bind partial applications, so it verges on undefined behavior. You also have to write very carefully — once the code grows, one slightly wrong form and the "memoization" is silently gone. Use with caution.

Addition:

```haskell
-- a better shape for memoized functions: open recursion
fib f 0 = 0
fib f 1 = 1
fib f n = f (n - 1) + f (n - 2)

fibMemo = fix (memoize . fib)
-- -> memoize (fib fibMemo)
```

Summary

Nothing much to summarize, haha. All in all, I find everything except streaming ugly. Same goes for recursion schemes — I would stop at hylomorphism at most; and I think applying recursion schemes to recursion that is "not about processing data" is not the right use either, because it is not intuitive.

The newly added Memocombinators has a library of the same name on Hackage: data-memocombinators: Combinators for building memo tables. The method came up in a Haskell group chat a couple of days ago (link: Memoization). When I first saw it I felt deeply ashamed (the methods earlier in this post are so convoluted as to be nearly useless), very excited, and just as deeply puzzled — how exactly does the thunk do the memoizing? I later got pointers from people in the group (I'm too green; sorry for the @), and finally understood the distinction. Many thanks to them.
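Appendix: one possible concretization of the Memoizable class and the product instance sketched above — a sketch of my own, not the data-memocombinators API (its real combinators differ; the Int instance below also only handles non-negative keys):

```haskell
class Memoizable k where
  memoize :: (k -> v) -> (k -> v)

-- Non-negative Int keys: the list-backed cache from the article.
instance Memoizable Int where
  memoize f = (map f [0 ..] !!)

-- Map () v ~ v: cache the single result.
instance Memoizable () where
  memoize f = let v = f () in \() -> v

-- Map (a, b) v ~ Map a (Map b v): curry, then memoize layer by layer.
-- The outer cache stores inner memoized functions, so both layers share.
instance (Memoizable a, Memoizable b) => Memoizable (a, b) where
  memoize f = let g = memoize (\a -> memoize (\b -> f (a, b)))
              in \(a, b) -> g a b

-- memoize both arguments, as in the article
memoize2 :: (Memoizable a, Memoizable b) => (a -> b -> r) -> (a -> b -> r)
memoize2 = curry . memoize . uncurry

-- example: a memoized binomial coefficient over two Int arguments,
-- written with the transition relation directly (keys stay non-negative)
binom :: Int -> Int -> Integer
binom = memoize2 go
  where
    go _ 0 = 1
    go n k
      | k > n     = 0
      | otherwise = binom (n - 1) (k - 1) + binom (n - 1) k
```

As with fibMemo, binom is a point-free CAF, so the two-layer cache is built once and shared across every call.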