Why did you build something different? What was the motivation compared to Bear?
I'm a heavy Bear user, recently migrating most of my stuff to Drafts. My problem with Bear is that it's getting slow and I don't have easy direct access to my text data. But it does have apps for all devices, and drag-and-drop images are really useful for lab/electronics work…
For me the main issue with Bear is that it does not work with a directory of Markdown files, but locks them away in a database. If it worked directly on Markdown files, it would be nearly perfect. They did make a standalone Markdown editor called Panda, but didn't pursue it for some reason.
I'm tired of words being misused. We have hoverboards that do not hover, self-driving cars that do not, actually, self-drive, starships that will never fly to the stars, and "open"… I can't even describe what it's used for, except everybody wants to call themselves "open".
These controversies erupt regularly, and I hope you will see a common thread in most of them: making decisions for your users without informing them.
Please fight this hubris. Your users matter. Many of us use your tools for everyday work and do not appreciate having the rug pulled from under them on a regular basis, much less so in an underhanded and undisclosed way.
I don't mind the bugs, these will happen. What I do not appreciate is secretly changing things that are likely to decrease performance.
That is not what I wrote. The phrases "without informing them", "in an underhanded and undisclosed way" and "secretly changing things" were important. I'm all for product evolution, but users should be informed when the product is changed, especially when the change can be for the worse (like dumbing down the model).
I've spent my entire working career dealing with companies that do the opposite. The product still goes stale. Find a better excuse.
You're acquiring users as a recurring revenue source. Consider stability and transparency of implementation details cost of doing business, or hemorrhage users as a result.
While I hate all the gaslighting Anthropic seems to do recently (and the fact that their harness degraded code quality while they forbid the use of third-party harnesses), making decisions for users is what UX is.
See also the difference between, e.g., Mac OS (with a capital M, the older, good versions) and waiting for the "Year of Linux on the Desktop".
I don't think the issue is making decisions for users, but trying to switch off the soup tap in the all-you-can-eat soup bar. Or rather: the wrong business model setting the wrong incentives for both sides.
The peripherals are great until you realize there's a dearth of software to use with them… like, GPS is fun and everything, but not if all you've got is the coordinates…
I really love the idea of LoRa in a watch though, so I hope that once this gets shipped, the software makes some leaps and bounds…
I think, like with the rest of Clojure, none of this is "revolutionary" in itself. Clojure doesn't try to be revolutionary, it's a bunch of existing ideas implemented together in a cohesive whole that can be used to build real complex systems (Rich Hickey said so himself).
Transducers are not new or revolutionary. The ideas have been around for a long time, I still remember using SERIES in Common Lisp to get more performance without creating intermediate data structures. You can probably decompose transducers into several ideas put together, and each one of those can be reproduced in another way in another language. What makes them nice in Clojure is, like the rest of Clojure, the fact that they form a cohesive whole with the rest of the language and the standard library.
Yes, this is the one I wrote about. I used it quite a bit a long time ago to get more performance. I remember vaguely that the performance was indeed there, but the package wasn't that easy to use and errors were hard to debug.
Performance is one of the niceties of transducers, but the real benefits are from better code abstractions.
For example, transducers decouple the collection type from data-processing functions. So you can write (into #{} ...) (a set), (into [] ...) (a vector) or (into {} ...) (a map) — and you don't have to modify the functions that process your data, or convert a collection at the end. The functions don't care about your target data structure, or the source data structure. They only care about what they process.
The fact that no intermediate structures have to be created is an additional nicety, not really an optimization.
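The decoupling described above isn't tied to Clojure. Here is a minimal sketch in Python (the helper names `mapping` and `transduce` are my own, loosely modeled on Clojure's transducer arities, not any real library): the transformation `mapping(inc)` never mentions the source or target collection; only the reducing function and the initial value decide what gets built.

```python
from functools import reduce

def mapping(f):
    """A transducer: takes a reducing function, returns a new one
    that applies f to each input before passing it along."""
    def xform(rf):
        def step(acc, x):
            return rf(acc, f(x))
        return step
    return xform

def transduce(xform, rf, init, coll):
    """Reduce coll with the transformed reducing function."""
    return reduce(xform(rf), coll, init)

inc = lambda x: x + 1

# Same transformation, different target collections: only the
# reducing function and the initial value change, never mapping(inc).
as_list = transduce(mapping(inc), lambda acc, x: acc + [x], [], [1, 2, 2, 3])
as_set  = transduce(mapping(inc), lambda acc, x: acc | {x}, set(), [1, 2, 2, 3])

print(as_list)  # [2, 3, 3, 4]
print(as_set)   # {2, 3, 4}
```

Note that no intermediate list is ever built along the way; each input flows straight through `step` into the final accumulator.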
It is true that for simple examples the (-> ...) is easier to read and understand. But you get used to the (into) syntax quickly, and you can do so much more this way (composable pipelines built on demand!).
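The "composable pipelines built on demand" point can be sketched the same way. Again in Python with hypothetical helpers (modeled on Clojure's `map`/`filter` transducer arities and `comp`, not a real library), composing two transducers yields one pipeline value you can hand to any reduction:

```python
from functools import reduce

# Minimal transducer helpers, for illustration only.
def mapping(f):
    return lambda rf: (lambda acc, x: rf(acc, f(x)))

def filtering(pred):
    return lambda rf: (lambda acc, x: rf(acc, x) if pred(x) else acc)

def compose(*xfs):
    """Like Clojure's comp on transducers: the first-listed
    transformation sees each input first."""
    def xform(rf):
        for xf in reversed(xfs):
            rf = xf(rf)
        return rf
    return xform

# Build the pipeline once, plug it in wherever needed.
pipeline = compose(mapping(lambda x: x + 1),
                   filtering(lambda x: x % 2 == 0))

result = reduce(pipeline(lambda acc, x: acc + [x]), range(5), [])
print(result)  # [2, 4]
```

The pipeline is just a value, so it can be stored, passed around, and combined with other pipelines before any data exists.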
I'd argue that for most people performance is the single best reason to use them. The exception is if you regularly use streams/channels and benefit from transforming inside them.
To take your example, there isn't much abstraction difference between (into #{} (map inc ids)) vs (into #{} (map inc) ids), nor is there a flexibility difference. The non-transducer version has the exact same benefit of allowing specification of an arbitrary destination coll and accepting just as wide a range of things as the source (any seqable). Whether in a transducer or not, inc doesn't care about where its argument is coming from or going. The only difference between those two invocations is performance.
Functions already provide a ton of abstraction, and the programmer will rightly ask, "why should I bother with transducers instead of just using functions?" (i.e., other, arbitrary functions not of the particular transducer shape). The answer is usually going to be performance.
For a literal core.async pipeline, of course, there is no replacing transducers because they are built to be used there, and there is a big abstraction benefit to being able to just hand a transducer to the pipeline or chan vs building a function that reads from one channel, transforms, and puts on another channel. I never had the impression these pipelines were widely used, but I'd love to be wrong!
Hmm. I'm not sure what you are looking for — myself, I write software that supports my living, and I'm not looking for thrills. What I get with Clojure is new concepts every couple of years or so, thought through and carefully implemented by people much smarter than me, in a way that doesn't break anything. This lets me concentrate on my work and deliver said software that supports my living. And pay the bills.
Transducers are IMHO one of the most under-appreciated features of Clojure. Once you get to know them, building transducer pipelines becomes second nature. Then you realize that a lot of data processing can be expressed as a pipeline of transformations, and you end up with reusable components that can be applied in any context.
The fact that transducers are fast (you don't incur the cost of handling intermediate data structures, nor the GC costs afterwards) is icing on the cake at this point.
Much of the code I write begins with (into ...).
And in Clojure, like with anything that has been added to the language, anything related to transducers is a first-class citizen, so you can reasonably expect library functions to have all the additional arities.
[but don't try to write stateful transducers until you feel really comfortable with the concepts, they are really tricky and hard to get right]
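To illustrate why stateful transducers deserve that warning, here is a sketch in Python of something like Clojure's (dedupe), which drops consecutive duplicates (the helper is hypothetical, for illustration only). The state lives in the closure created when the transducer is applied to a reducing function, so each reduction gets fresh state; getting that lifetime wrong is one of the classic mistakes:

```python
from functools import reduce

def deduping():
    """A stateful transducer like Clojure's (dedupe): drops
    consecutive duplicates."""
    def xform(rf):
        # State is created here, when the transducer wraps rf,
        # so every reduction starts with a fresh `prev`.
        prev = [object()]  # sentinel no input can equal
        def step(acc, x):
            if x == prev[0]:
                return acc      # consecutive duplicate: skip
            prev[0] = x
            return rf(acc, x)
        return step
    return xform

xf = deduping()
out = reduce(xf(lambda acc, x: acc + [x]), [1, 1, 2, 2, 2, 3, 1], [])
print(out)  # [1, 2, 3, 1]
```

Even this toy version hides a real Clojure concern it glosses over: proper transducers also need init and completion arities, and stateful ones must flush any buffered state on completion, which is where bugs tend to hide.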
I would disagree here. Apple actually did lose their values, or they are in the process of doing so.
Ads in App Store results, ads in Maps (coming soon!), constant upsells and pushes of subscriptions and services, a forced upgrade of Numbers/Pages/Keynote with annoying nags that can't be turned off: things are getting worse.
Also, when the word "values" is mentioned, one cannot forget about Tim Cook's donations to Trump and his overall support of Trump and cozying up to him.