q-memo

memoization tooling for functions in the q programming language

motivation

  • pure functions that are expensive to compute, called repeatedly, and that produce relatively small results should be memoized; ie check whether the arguments have been seen before: if they have, fetch the cached result; if not, compute the result and cache it for future lookups (a minimal hand-rolled sketch follows this list)
    • eg query steps against a read-only db
    • eg algorithms whose implementation lends itself naturally to top-down dynamic programming
  • the process of memoizing a function should be agnostic to the function, undoable, configurable, and easy
  • lru semantics for cache eviction
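
for orientation, here is a minimal hand-rolled version of the idea in plain q. it is not this library's implementation, and `fibcache and `memofib are hypothetical names used only for illustration; q-memo packages the same pattern behind the interface below and adds lru eviction, configuration, and undo on top.

fibcache:()!()    / empty dictionary: argument -> result
memofib:{$[x in key fibcache;fibcache x;[r:$[x<2;x;memofib[x-1]+memofib x-2];fibcache[x]:r;r]]}  / fetch if seen, else compute, cache, and return
memofib 30        / 832040; each value is computed once, repeat calls are lookups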

interface

configuration

name       description
.memo.C    cache capacity. defaults to 100h. can be any short
.memo.P    cache table parent structure name. defaults to `cache. can be any symbol
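
a hedged sketch of adjusting these defaults; the assumption here (not spelled out above) is that .memo.C and .memo.P are read at the time a cache table is initialized, and `memocache is just an illustrative name:

.memo.C:500h        / assumption: subsequently created cache tables hold up to 500 results
.memo.P:`memocache  / assumption: subsequently created cache tables live under `memocache
.memo.initns[]      / under these assumptions, expected to yield `..memocache.0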

functions

name            description
.memo.init      initializes a cache table given a namespace and capacity
.memo.initns    initializes a cache table given a namespace
.memo.initcap   initializes a cache table given a capacity
.memo.mk        makes a memoized copy of a function given a source, target, and cache name
.memo.mkmut     mutates a given function into its memoized copy given a source
.memo.mknew     makes a memoized copy of a function given a source and target
.memo.rm        removes a memoized function and clears its cached data given a source
.memo.mv        moves a memoized function to a new cache table given a source and cache name
.memo.cap       returns the capacity of a cache table given a cache name
.memo.rs        resizes a cache table given a cache name and capacity
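
a condensed workflow, distilled from the examples below (assuming a fresh session in the root namespace; `f is an illustrative name):

.memo.init[`.;100h]         / create a cache table in `. holding up to 100 results
f:{sum x*x};.memo.mkmut`f   / memoize `f in place, backed by the most recent cache table
f 1 2 3                     / first call computes 14 and caches it
f 1 2 3                     / the repeat call is answered from the cache
.memo.rm`f                  / undo: restores the original `f and clears its cached rows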

examples

q)/ .memo.init takes a [namespace;<short>]
q).memo.init[`.;2h]
`..cache.0
q)/ a cache table, named `0, is created within `cache in `.
q)cache
 | ::
0| (+`f`a!(,`;,::))!+(,`r)!,,::

q)/ .memo.initns is shorthand for .memo.init, using .memo.C as the cache capacity
q).memo.initns[]
`..cache.1

q)/ .memo.initcap is shorthand for .memo.init, using \d as the namespace
q)\d .a
q).memo.initcap 2h
`.a.cache.0

q)/ (::) or ` may be used to reference \d
q).memo.initns`
`.a.cache.1
q)/ .memo.mk takes a [<function>;<symbol>;<symbol>]
q)f:sum;.memo.mk[`f;`g;`cache.0]
`.a.g

q)/ `g is a memoized copy of `f with the same arity, and it points to `.a.cache.0
q)g 1 2 3
6
q)g 4 5 6
15
q)/ since 1 2 3 and 4 5 6 are new args, `g computes them and caches the results
q)/ in `.a.cache.0
q)cache.0
f    a     | r
-----------| --
     ::    | ::
.a.g ,1 2 3| 6
.a.g ,4 5 6| 15
q)/ subsequent invocations with recognized arguments will fetch from
q)/ the cache instead of executing the underlying implementation
q)g 4 5 6
q)cache.0
f    a     | r
-----------| --
     ::    | ::
.a.g ,1 2 3| 6
.a.g ,4 5 6| 15

q)/ .memo.mkmut is shorthand for .memo.mk, mutating the function in place and
q)/ using the most recent cache table
q)h:div;.memo.mkmut`h
`.a.h

q)/ .memo.mknew is shorthand for .memo.mk, using the most recent cache table
q)j:prd;.memo.mknew[`j;`j1]
`.a.j1

q)/ .memo.mk can also take a function as a lambda
q).memo.mk[min;`f1;`cache.1]
`.a.f1

q)/ a null second argument will use the same name as the first, like .memo.mkmut
q)f2:max;.memo.mk[`f2;`;`cache.0]
`.a.f2
q)/ removing a memoized function that was made in place reverts the
q)/ implementation
q).memo.rm`h
`.a.h
q)h
div

q)/ if made with a literal, the source literal is returned
q).memo.rm`f1
min

q)/ if made with a symbol, the source symbol is returned
q).memo.rm`g
`.a.f

q)/ in all cases, the memoized copy no longer exists
q)g
'g

q)/ any removal of a memoized function clears its cached data
q)cache.0
f a | r
----| --
  ::| ::
q)/ .memo.mv takes a [<symbol>;<symbol>] as its function and cache table,
q)/ respectively
q)j1 2 3 4
q).memo.mv[`j1;`..cache.0]
`.a.j1
q)/ its cached data has moved
q)cache.1
f a | r
----| --
  ::| ::
q)\d .
q)cache.0
f     a     | r
------------| --
      ::    | ::
.a.j1 ,2 3 4| 24
q)/ and it now points to this new cache, too
q).a.j1 5 6 7
210
q)cache.0
f     a     | r
------------| ---
      ::    | ::
.a.j1 ,2 3 4| 24
.a.j1 ,5 6 7| 210
q)/ .memo.cap takes a [cache]
q).memo.cap`cache.0
2h
q)/ in practice, you probably wouldn't want to make a cache table
q)/ this small

q)/ .memo.rs takes a [cache;<short>]
q).memo.rs[`cache.0;10h]
`.a.cache.1
q).a.j1 1 2 3;.a.j1 2 3 4;cache.0
f     a     | r  
------------| ---
      ::    | :: 
.a.j1 ,2 3 4| 24 
.a.j1 ,5 6 7| 210
.a.j1 ,1 2 3| 6  

q)/ resizing a cache to below its capacity trims it
q).memo.rs[`cache.0;1h]
`..cache.0
q)cache.0
f     a     | r
------------| --
      ::    | ::
.a.j1 ,1 2 3| 6
q)/ notice the lru semantics here

q)/ the principal reason why .memo operates via indirection is to allow named
q)/ functions within lexical scope to magically use dynamic programming.
q)/ a stupid implementation for computing the nth fibonacci number follows
q)dumb:{$[1=x;0;x<3;1;dumb[x-1]+dumb x-2]}
q)dumb 10
34
q)\t dumb 30
497
q)/ we want to first expand the cache capacity beyond its paltry 1h
q).memo.rs[`cache.1;100h];.memo.mkmut`dumb
`..dumb
q)\t dumb 30
0
q).memo.rm[`dumb][27]~dumb 27
1b
q)/ the stupid implementation is magically hundreds of times faster

pitfalls

q)/ if the source function is passed as a literal, its new name can't be null
q).memo.mknew[sums;`]
'null name

q)/ don't play with the parent structure, eg
q)/ cache:10
q)/ the library will cease to function properly

q)/ don't memoize stateful functions
q).memo.mknew[rand;`why]
`..why
q)count distinct why each 10#10
1
q)/ randomness is lost

q)/ be wary of .z.s
q)megadumb:{$[1=x;0;x<3;1;.z.s[x-1]+.z.s x-2]}
q)\t megadumb 30
436
q)\t .memo.mkmut[`megadumb]30
664
q)/ recall that .z.s produces a function literal as defined at parse-time,
q)/ so cache lookups are not used anywhere except the top of the stack
q)\t megadumb 30
0
q)\t megadumb 31
766
q)/ recursively memoize by using names
q)dumb:{$[1=x;0;x<3;1;dumb[x-1]+dumb x-2]}
q)\t .memo.mkmut[`dumb]31
0
q)megadumb[32]~dumb 32
1b

q)/ as the memo tables are global data, memoized functions that need to
q)/ write to the cache, ie see arguments that are not yet cached, cannot
q)/ run in parallel. if the function could instead be made logically
q)/ concurrent, measure whether that trade is worth it. if, however, you
q)/ know that invocations will only read from the cache, feel free to
q)/ run them in parallel
q)0<system"s"
1b
q)dumb peach 10 30
34 514229
q)dumb peach 40 50
'noupdate

q)/ don't make the cache too small. the additional overhead incurred by
q)/ both cache misses and constant evictions will drastically impact
q)/ performance. this is particularly relevant when memoizing recursive
q)/ calls with a large number of leaves
q).memo.rs[.memo.initns[];1h]
`..cache.2
q).memo.mknew[.memo.rm[`dumb];`ultradumb]
`..ultradumb
q)\t ultradumb 30
550

q)/ don't memoize a function multiple times. there is absolutely no
q)/ value in doing so, and its cache behavior will get complicated
q).memo.mkmut .memo.mknew[sum;`foo]
`..foo
q)count string foo
142
