
The stack described above is the one I’ve been working on professionally for the last year, and I wouldn’t recommend it.

The main reason is the absurd amount of complexity, with costs heavily outweighing the benefits gained from the solution.

For example, the simple task of adding a new entity consists of:

On the backend: creating a migration, creating the data entity (Ecto), writing the structure module (sanitization, validation, basic logic if needed), mounting queries, writing the input mutation(s), writing the output query/queries, and unit testing.

On the frontend: creating the component, creating the form, writing the GraphQL queries, writing the mutation, wrapping components/queries in components, connecting the query to components, providing action handlers for inputs, unit testing, and integration testing.
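
To make the front-end half of that churn concrete, here is a minimal sketch (the `Author` entity and its fields are hypothetical, not from any real schema) of the GraphQL documents such an entity needs. Since every document names its fields explicitly, a shared field-list constant is one small way to confine the "adjust all of that" step to one place:

```javascript
// Hypothetical front-end GraphQL documents for an "Author" entity.
// Adding one field (say, birthdate) would otherwise mean editing every
// query and mutation by hand; a shared constant narrows that to one line.
const AUTHOR_FIELDS = "id name birthdate";

const AUTHORS_QUERY = `
  query Authors {
    authors { ${AUTHOR_FIELDS} }
  }
`;

const CREATE_AUTHOR_MUTATION = `
  mutation CreateAuthor($input: AuthorInput!) {
    createAuthor(input: $input) { ${AUTHOR_FIELDS} }
  }
`;

console.log(AUTHORS_QUERY.includes("birthdate")); // true
```

This only trims the query-side duplication; the Ecto schema, Absinthe schema, and form still need their own edits.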

Now I have an authors list. And even though I am full stack, I haven’t yet spent a single minute on getting proper UX design in place. Oh, do we need to add the author’s birthdate? Dang, let me adjust all of that.

In my opinion technical debt accumulates faster than in other solutions. GraphQL is complex. React (done right) is complex. Apollo is complex (Elixir is simple, yet it’s only one cog). Deciding to do file upload in GraphQL led me down a rabbit hole that took at least a week to dig out of.

When trying to find the source of all the development issues, my thoughts go towards GraphQL. Maybe it is too complex, or we didn’t have enough experience with it? Yet it was really nice to work with once all the backend and frontend boilerplate was already written. It makes sense, even though it requires some heavy thought behind it. Maybe it’s Apollo, which locks you into one of two specific trains of thought, or Absinthe, which requires side-hacks to get some of the more advanced features working with Apollo, like file uploads or query batching.

From my perspective I’d say this is just too much. Every single part of this stack adds overhead to the other parts. Maybe if I had more specialized team members it would get easier, but with only 3 of us, with me, as a full stack, constantly switching between all of that, it was a tiresome, low-productivity effort. Right now we’re disassembling the app into a classic REST app and we’re seeing development speed increase on a week-to-week basis, even though we had to scrap most of the frontend.

I guess there would be some benefit in having a write-up of all of it, since this doesn’t even scratch the surface of a year of development issues with this stack, but even in this very "short" form it may serve as a word of warning that this stack is not necessarily something you want to take on.



Phoenix Live View[1] fits the gap between server-rendered HTML pages and JavaScript-rendered front-ends. If you’re after a responsive web UI without needing to learn so many disparate frameworks and technologies, it could be a good fit.

[1] https://github.com/phoenixframework/phoenix_live_view


Phoenix Live View is very new and not appropriate at all for real-world use yet.


> Phoenix Live View is very new

The technique is not new; it's just server-side rendering over a websocket plus DOM patching (morphdom is also very old).

> not appropriate at all

"Appropriate for some" would be closer to the truth. It handles pretty much any typical (real-world) CRUD flow that needs to fire requests to the server anyway (e.g. forms, business-logic validation).

The only reason not to use it _right now today_ is that it's not 1.0 yet.


I'm not one of those people who think a <1.0 version number is an absolute dealbreaker, but when the first commit is only 6 months old it's probably not the wisest move to make it a cornerstone of your stack.


We've also felt the complexity of React and Apollo. It works best when, as others mentioned, you've got distinct teams that can focus on each part. In situations where that isn't the case the same decouplings that make it easier for teams to operate independently just add overhead and complexity.

We're in a similar boat these days, so in fact our latest projects are back to simple server-side rendering, but we're still making the data retrieval calls with GraphQL. It ensures that the mobile app and reporting tools we're also developing will have parity, and we don't need to write ad hoc query logic for each use case. The built-in docs and validations have simply proven too useful to pass up, and you really don't need a heavyweight client to make requests.
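
That last point is easy to demonstrate: a GraphQL request is just an HTTP POST with a JSON body, so something like this sketch covers server-side use without Apollo (the injectable `fetchImpl` parameter is my own convention, added so the function can be exercised without a network):

```javascript
// A GraphQL request is just a POST of { query, variables } as JSON.
// fetchImpl defaults to the global fetch (Node 18+/browsers) but can be
// swapped out for testing.
async function gqlRequest(url, query, variables = {}, fetchImpl = fetch) {
  const res = await fetchImpl(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables }),
  });
  const { data, errors } = await res.json();
  if (errors) throw new Error(errors.map((e) => e.message).join("; "));
  return data;
}
```

On the server you'd point this at your GraphQL endpoint and render the result straight into templates; the schema docs and validation still apply, with none of the client-side cache machinery.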


> Maybe if I had more specialized team members it would get easier

It really would be easier. We are using this stack and have separate backend and frontend devs. Each loves their side of the stack. Being backend myself, I don't find it too time-consuming to get new entities going. However, I imagine that if you were doing the whole stack and repeating the Ecto schema, Absinthe schema, and Apollo queries, it might get tedious. I particularly enjoy how easy it is to modify the schemas once everything is set up. If we ever need to expose more data, it is usually done within minutes. There is a massive benefit in the forced standardization of GraphQL too. Being rigorous about standardizing APIs and how you do filtering, sorting, embedding, nesting and so on is tiring and a waste of time - you end up writing mini frameworks. Absinthe and GraphQL reduce this pain for us considerably.


That was one of the reasons I chose this stack. The plan was to hire a team just after the stack was set up. Unfortunately the financial plans toppled, and we (2 full stacks + a front-end UX) were stuck with a very complex architecture designed for ~10 people.

If I were working on only one part it would be great, but instead of parallelizing the effort it was sequenced, which kind of sucked.


It's premature optimization at the architecture level. I've seen it happen so many times.

On top of all you said, the GraphQL stack is horrible for caching. You will not have this problem until you reach really high traffic, but once you do, it will eat you alive.

Unlike with a REST endpoint, you can't cache by URL. You can't use HTTP caching headers. You don't know beforehand what GQL query will come in, and even with batched queries for different types it's super hard to optimize. It's very easy to end up in N+1 query hell.
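
The usual mitigation for the N+1 side of this is the batching technique that DataLoader popularized: collect every key requested during one tick of the resolver tree, then issue a single query for the whole batch. A hand-rolled sketch (not the real dataloader API, and no caching layer):

```javascript
// Minimal per-tick batcher: every load() within one microtask turn is
// answered by a single call to batchFn instead of N separate queries.
class BatchLoader {
  constructor(batchFn) {
    this.batchFn = batchFn; // async (keys) => values, same order as keys
    this.queue = [];
  }

  load(key) {
    return new Promise((resolve) => {
      this.queue.push({ key, resolve });
      // The first load in this tick schedules one flush for the whole batch.
      if (this.queue.length === 1) queueMicrotask(() => this.flush());
    });
  }

  async flush() {
    const batch = this.queue;
    this.queue = [];
    const values = await this.batchFn(batch.map((e) => e.key));
    batch.forEach((e, i) => e.resolve(values[i]));
  }
}

// Example: three loads in the same tick, one batched call.
const double = new BatchLoader(async (keys) => keys.map((k) => k * 2));
Promise.all([double.load(1), double.load(2), double.load(3)]).then(
  (vals) => console.log(vals) // [ 2, 4, 6 ]
);
```

Resolvers can then call `loader.load(id)` freely; ten sibling resolvers still cost one database round trip. It helps with N+1, but it does nothing for the URL/header caching problem above.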

The lesson I learned is to use the boring stuff until it really needs to scale up. A REST API with static HTML and some sprinkles of JS will get you to the phase where you actually need to start using React, GQL, etc.

GQL trades a lot of things for flexibility, but 99% of the apps don't need that in the first place.

But hey, on the upside everyone can put in their resumes that they used all the new hot shit :)


> it's super hard to optimize. It's very easy to have an N+1 query hell.

Both Postgraphile and Hasura deal with this. I have no idea about Absinthe.


Similar experience but without GraphQL. We had server-side rendering with a Node server. Our production server became a Node farm, with Phoenix + PostgreSQL requiring less than 1 GB of RAM and Node using at least 8 extra GBs. We eventually ditched SSR: we send the React app and wait for it to render. We're back to 1 core and (mostly unused) 4 GB. It's a business application with a complicated UI; customers don't mind waiting a couple of seconds if they want to start from a bookmarked screen.

For a simple UI I'd generate HTML server side with EEx and spare us the cost of front-end development, which is also a productivity nightmare. The amount of work needed to add a single form field with React/Redux is insane.


Just a quick one: why would you need Redux for forms? That is, in my opinion, total overkill.

I have forms either keep their own state or (preferred) just use Formik for all of this. In my stack, this then allows me to just add a field in the GraphQL schema (backend), add it to the query, add the Formik field + yup validation, and be done.


Some people would argue that if you're using Redux, also having local state logic is an anti-pattern.

That would mean that if you use Redux, a form also requires actions for form update/submit/success/error and the form data should be stored in the redux store.

That is one of the main issues I have with Redux: I feel it adds automatic complexity for simple things, but at the same time I'm not sure it's very good to have a mix of things happening via store/actions/reducers and others via local state/ajax.


> Some people would argue that if using Redux, also having local state logic is an anti-pattern.

I won't disagree that this is a popular opinion, but there's little practical benefit to storing state that's truly local to a single component (or a very small tree) in Redux just because it's there.

Even the maintainers of Redux say it's perfectly acceptable to use local state - https://redux.js.org/faq/organizing-state#do-i-have-to-put-a...
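
For contrast, here's the ceremony that "everything in the store" implies for a single form field, sketched with a hand-rolled store (no actual Redux import; `createStore` here is a toy stand-in just to show the shape):

```javascript
// The Redux-style shape: an action type, a dispatch per keystroke, and a
// reducer case, all to track one text field.
function createStore(reducer, initialState) {
  let state = initialState;
  return {
    getState: () => state,
    dispatch: (action) => { state = reducer(state, action); },
  };
}

function formReducer(state, action) {
  switch (action.type) {
    case "FIELD_CHANGED":
      return { ...state, [action.field]: action.value };
    default:
      return state;
  }
}

const store = createStore(formReducer, { name: "" });
store.dispatch({ type: "FIELD_CHANGED", field: "name", value: "Ada" });
console.log(store.getState().name); // Ada
// With local component state, the same update is a one-line setState call.
```

For state that only one component reads, all of this machinery buys nothing over local state.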


I don't see how you can blame something for adding complexity based on what other people _think_ is an anti-pattern.

In fact, most of the time it's not desirable to update your store before you know the data has been validated anyway. The store should always be the source of truth, but that also means it should be valid.

That's the approach I am going with in any case when working with some kind of global state.


Abramov advises against using redux for forms


Or use the browser's built-in form state management: https://medium.com/@everdimension/how-to-handle-forms-with-j...

Bonus: it's almost certainly more accessible than custom solutions.
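
A minimal sketch of that approach, assuming the `FormData` global (browsers, and also Node 18+): read the whole form once on submit instead of mirroring every keystroke into framework state.

```javascript
// Serialize a form with the platform's own FormData. In a browser you'd
// build it from the form element on submit:
//   form.addEventListener("submit", (e) => {
//     e.preventDefault();
//     const data = serializeForm(new FormData(e.target));
//   });
function serializeForm(formData) {
  return Object.fromEntries(formData.entries());
}

// Standalone demo:
const fd = new FormData();
fd.append("name", "Ada");
fd.append("birthdate", "1815-12-10");
console.log(serializeForm(fd)); // { name: 'Ada', birthdate: '1815-12-10' }
```

The browser also handles required/pattern validation natively via input attributes, which is where the accessibility win comes from.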


Any idea why SSR used so much RAM? I wonder if the virtual DOM approach of React contributed substantially to it, and whether something like Svelte (https://svelte.dev/) would do much better.


We investigated it a little then decided that finding a proper solution wasn't worth our time. This is what we discovered.

We had a pool of 4 Node instances which is the default of the solution we were using (a patched version of https://github.com/hassox/std_json_io)

Each Node instance has a 1 GB memory limit (the heap? It seems Java-like). We failed to find a way to raise it but, again, we didn't invest much into it beyond some googling. There seems to have been a command line option for that, but it doesn't work anymore.

Each hit to Node raises the memory usage until it gets to 1 GB and throws an error and gets recycled, which unfortunately translates to a 502/503 to the client. We can intercept those errors in Elixir and try again but it's far from ideal.

To have fewer errors we naively decided to increase the number of workers, but then we also had to increase the RAM of the server. The first hit from each client gets served by Node, so eventually Node's resource usage dwarfed Elixir's. We felt like we were doing it wrong (I'm sure there is a way to get a saner setup) and decided to turn off server-side rendering. Nobody complained, and we're saving some $40-50 per month on that single server, plus our time, which is worth more than that.

I think that projects with little load should run on low tech uncomplicated solutions: a reverse proxy and an application server were enough in the 90's for the same scenario and are still OK now.


What load did you have? Was SSR used in private routes as well?

In my experience, SSR should not introduce more complexity than you already have.


This is the best part: "Main reason is the absurd amount of complexity with costs heavily outweighing benefits gained from the solution." Couldn't agree more! Thx xlii.

I will remember that one: "Right now we’re disassembling app to a classic REST-app and we’re seeing development speed increase on week-to-week basis". Thx.


Thanks for this. I'm not a web dev, but yes, after seeing the 3rd or 4th "and then we do this" I've just started scrolling towards the end of the post and got bored fairly quickly. My thoughts exactly: this seems way too complex for creating just a few pages; too many dependencies, too many things to remember and update.


I want to say: any sufficiently complicated REST API contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of a GraphQL API.

Although complex, GraphQL (done right) is much easier than REST (done right).


The real key part of this phrase is "sufficiently complicated". GraphQL shines in certain scenarios, but not every API needs this complexity.


Unless I'm missing something, Hasura requires much less effort on the back-end than what you're describing with Absinthe. Hasura runs in its own process (or processes; it doesn't have any state of its own so it can scale), and it can deliver events to your back-end via webhooks, so it doesn't matter what language you use for the back-end.

As for file uploads, it seems to me that the best way to do that is to have the front-end upload directly to your cloud storage service. Both S3 and Google Cloud Storage have a feature called signed URLs, where your back-end can create an object, grant the necessary permission, then give the client a temporary URL to upload to that object. Then just store the URL in the database.
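
A sketch of that flow, with the caveats that the endpoint path (`/api/upload-url`) and the response shape (`{ uploadUrl, publicUrl }`) are invented for illustration; the two-step shape itself (back-end signs, client PUTs straight to storage) is how S3/GCS signed URLs work:

```javascript
// Signed-URL upload: the application server only signs and records URLs;
// the file bytes go directly from the client to cloud storage.
// fetchImpl is injectable so the flow can be tested without a network.
async function uploadFile(file, fetchImpl = fetch) {
  // 1. Back-end creates the object and returns a short-lived signed URL.
  const res = await fetchImpl("/api/upload-url", { method: "POST" });
  const { uploadUrl, publicUrl } = await res.json();

  // 2. Client PUTs the bytes straight to storage; GraphQL never sees them.
  await fetchImpl(uploadUrl, { method: "PUT", body: file });

  // 3. The caller stores publicUrl in the database via a normal mutation.
  return publicUrl;
}
```

This sidesteps the multipart-over-GraphQL hacks entirely: the mutation only ever carries a plain string.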


Most of the complexity I've seen people complain about seems to stem from a lack of understanding, a lack of proper tooling, and, most of the time, the codebase itself.

Newer technology usually adds a ton of different patterns and abstractions that can hide a lot of things away from you, so it becomes hard to understand unless the information is presented as a 101, or you invest all your time reading documentation about each individual part of everything. Which, to be frank, nobody has time for.

I have used GraphQL with Apollo and React for the last couple of years on different kinds of projects, and what I have noticed is that the tools themselves, while quite abstracted, also try to support a lot of edge cases, which can make them hard to apply to your project. I had to either use other libraries or build my own that helped me abstract some of the things that were most commonplace but required a lot of boilerplate otherwise.

I have found tremendous value in following the philosophy of not pre-optimizing for everything until it becomes a burden to work with. It can be hard to determine when something is going to become so big that it will be difficult, if not impossible, to refactor, but you learn that as you go.


I agree! You must not use all the fancy new things to build your application just because they exist.

We also took a step back, removed the GraphQL stack, and now use a simple, clean REST API only. It has increased our productivity, and we don't have to divide our time to maintain another module. Before, we used both REST and GraphQL, because it made no sense to put everything into GraphQL.


I assume you're keeping Elixir/Phoenix as a RESTful API, and then React on the front-end? I've heard Redux suffers from similar issues with complexity, so what will you do?


Yes. Elixir and Phoenix are wonderful and still my favorite language/framework and they are still backing things.

For front end I wanted to move back to Ember since I had a lot of positive experience with it but that was impossible due to the business requirements.

As for Redux, I try to stay away from it, following the philosophy of “you don’t need it unless you know you need it”. Cargo culting made Redux a default piece of the stack, and it’s another very complex piece that is sensitive to mis-design.

Instead of using Redux we rolled our own adapter, since we had to work with some data outside Apollo, and we are rather happy with it. Not sure how it will scale to RN though.


> Stack described above is the one I’ve been working on for professionally for the last year and I wouldn’t recommend it.

Any alternatives that you would like to suggest?


> *Right now we’re disassembling app to a classic REST-app*

still with elixir/phoenix or did you scrap even that part?


Nah. Elixir stays ;)



