
Root Epic stops listening if an error occurs as a result of outputted action #591

Open
JamesPlayer opened this issue Nov 14, 2018 · 12 comments


@JamesPlayer

Do you want to request a feature or report a bug?
Bug

What is the current behavior?
If a JS error occurs as the result of an action emitted from a redux-observable epic, it stops all epics from listening to new actions. This is pretty nasty since on the front-end it still appears as if the app is working, but in the background nothing is getting saved.

I tried catching the error in my epic, but it doesn't catch errors that are caused by the resulting emitted action.

I thought that upgrading to redux-observable 1.0.0 might fix the issue, but unfortunately it remains.

The issue is different from the previously reported one, since the error is triggered in a reducer after the epic has emitted its output Redux action, so catching it with catchError in the chain of observables doesn't work.

If the current behavior is a bug, please provide the steps to reproduce and a minimal demo of the problem using JSBin, StackBlitz, or similar.

https://stackblitz.com/edit/redux-observable-playground-qtughj?file=ping-pong.js

Unfortunately, errors in StackBlitz make the whole page go grey and display the error message, which isn't helpful if you want to show the behaviour after the error occurs, and the redux-observable demos on JSBin no longer work. So you'll have to take my word for it: if the PING button were still visible after the JS error occurred and you pushed it, nothing would happen, because the rootEpic will have completely stopped listening for new actions.
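For reference, the shape of the playground repro is roughly the following (a reconstruction, not the exact StackBlitz code):

import { ofType } from 'redux-observable';
import { delay, mapTo } from 'rxjs/operators';

// Epic: answer PING with PONG after a second.
const pingEpic = action$ => action$.pipe(
  ofType('PING'),
  delay(1000),
  mapTo({ type: 'PONG' })
);

// Reducer: throws while handling the epic's output action. That exception kills
// the root epic's subscription, so later PINGs are silently ignored.
const pingReducer = (state = { isPinging: false }, action) => {
  switch (action.type) {
    case 'PING':
      return { isPinging: true };
    case 'PONG':
      throw new Error('bug while handling PONG');
    default:
      return state;
  }
};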

What is the expected behavior?
I would expect the error triggered in the reducer to be caught by the error operator or at the very least not stop the entire rootEpic from listening to new actions.

Which versions of redux-observable, and which browser and OS are affected by this issue? Did this work in previous versions of redux-observable?
All versions.

I've already posted this on Stack Overflow, unfortunately with no responses so far: https://stackoverflow.com/questions/53164276/redux-observable-cant-catch-error-from-emitted-action-error-stops-action-strea

@jayphelps
Member

Thank you for posting to Stack Overflow first! I've been too busy lately to monitor it personally, but the effort is super appreciated regardless, as many questions can indeed be answered by the community.

The issue you describe is known, but I'm still unsure what we should in fact do, so I've defaulted to not doing anything because this would be a breaking change and I want to be confident in whatever we choose.

I'm very interested to hear your thoughts.

It won't ever be caught by your epics because errors flow down the observer chain in Observables, just like next'd values do, unlike traditional exceptions, which flow "up" the call stack. So the catch() operator (catchError in v6) catches errors produced by the source you apply it to, not errors thrown by any observer subscribing to it.
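A minimal RxJS sketch (not from the thread) of that direction of flow:

import { of } from 'rxjs';
import { catchError } from 'rxjs/operators';

const output$ = of({ type: 'PONG' }).pipe(
  // catchError only handles errors signalled by its source (upstream),
  // so it never fires in this example.
  catchError(() => of({ type: 'RECOVERED' }))
);

output$.subscribe(action => {
  // Throwing here happens in the observer, downstream of the whole chain,
  // analogous to a reducer throwing when the epic's action is dispatched.
  throw new Error(`failed while handling ${action.type}`);
});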

There are a number of probably-less-than-ideal ways to handle this situation currently. The most obvious is a custom middleware that catches the errors produced by your reducers before they reach the epicMiddleware.

https://stackblitz.com/edit/redux-observable-playground-ra5mcp?file=configureStore.js

const store = createStore(
  rootReducer,
  // Placed after epicMiddleware so it catches errors thrown further downstream
  // (other middleware, the reducers) before they reach the epics' subscription.
  applyMiddleware(epicMiddleware, store => next => action => {
    try {
      return next(action);
    } catch (e) {
      // Rethrow asynchronously so the error still surfaces as unhandled
      // without terminating the epics.
      setTimeout(() => {
        throw e;
      });
    }
  })
);

This isn't ideal, however, because now no one can catch these exceptions from where they originally called store.dispatch(). While this doesn't apply to your particular example, because PONG is dispatched async (due to the delay(1000)), if it weren't, the error would have propagated all the way back to where you dispatched PING. e.g. here I removed the delay and caught the exception: https://stackblitz.com/edit/redux-observable-playground-gm18pk?file=index.js

I have seen one redux-observable app in the wild that relied on this, though I'll be the first to admit it's probably rare. If we did catch and rethrow errors async, any middleware placed before epicMiddleware wouldn't receive them, which in rare cases might be problematic (e.g. an error-logging middleware, which I've seen many times, would need to be placed after, not before, epicMiddleware, which is a footgun).
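To make that trade-off concrete, here's a quick sketch (not from the linked playground) of catching at the dispatch site, assuming the store from the snippet above and a synchronous PONG:

try {
  store.dispatch({ type: 'PING' });
} catch (e) {
  // With a synchronous PONG and no async-rethrow middleware, the reducer error
  // surfaces right here at the call site, where application code can catch it.
  console.error('caught at the dispatch site:', e);
}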

Another argument was made a couple of years back that we can't assume an app is still in a safe enough state to continue if an arbitrary exception happens inside your reducers or in any middleware following epicMiddleware. Though an app's developer could decide that's a safe assumption, or accept the "risks". These days I'm not entirely convinced redux-observable needs to adhere to that philosophy, and I'm betting that not shutting down your epics would be the better outcome far more often than those "risks" would materialize.

When I previously thought about this, the best I could come up with was maybe providing a callback hook that people could override to add their own dispatch handler, which could include a try/catch for these cases if they wanted, but it would ultimately look nearly identical to just providing your own middleware like the example above.

We could have some sort of option to just do this setTimeout rethrow for you, but it's kind of a difficult concept to explain, and I imagine it would be easily confused with other, unrelated error handling. I'm not sure what I'd name it either, as it isn't just reducer errors; it's errors from anything downstream of the middleware, which could be another middleware or indeed the reducers. So catchReducerErrors: true or whatever would be misleading.

This all might seem pedantic (and it is), but libraries like these have very fundamental consequences tied to their assumptions, just as the assumption we made here is now having consequences for you. I'm definitely interested in providing a solid solution to this though, whether that's changing the library or just providing clear documentation.

I'd also be interested if anyone else has thoughts on this, particularly any objections to redux-observable rethrowing. There is no way to prevent the error from terminating your epics without rethrowing the error async (or swallowing it entirely, but that's not on the table).

@jayphelps
Member

FWIW it appears the exact same behavior happens in @ngrx/effects, which is a library nearly identical to redux-observable for ngrx (which is like redux). I was hoping they'd have a great solution we could steal. In fact, ngrx stops working entirely when this happens, so any further dispatches are ignored.

Ultimately, the reason this hasn't been a big deal in practice for either library is that reducers should be pure functions which never throw. Without a doubt they should not explicitly throw; however, unintentional exceptions like simple variable typos can of course sneak into production and cause the app to "crash" with this behavior, so I'm still interested in solutions.

@evertbouw
Member

Since the actual problem is an exception in your reducer, I think it should also be handled there. You can wrap your root (and/or slice) reducer in a try/catch and handle it however you see fit.
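A minimal sketch of that idea (safeReducer is illustrative; rootReducer and epicMiddleware are the ones from the earlier snippet):

import { createStore, applyMiddleware } from 'redux';

// Wrap the root reducer so a throwing slice reducer can't take the dispatch
// (and with it the epics' subscription) down with it.
const safeReducer = reducer => (state, action) => {
  try {
    return reducer(state, action);
  } catch (error) {
    console.error('Reducer threw while handling', action.type, error);
    return state; // keep the previous state instead of crashing the dispatch
  }
};

const store = createStore(safeReducer(rootReducer), applyMiddleware(epicMiddleware));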

The epics crashing is annoying though; I'd like to avoid that 🤔

@jayphelps
Member

@evertbouw that's a great point I didn't think of: you can indeed also add a try/catch in your root reducer to catch these. Do you have any preference on how redux-observable should handle these, or do you prefer the status quo to any alternative?

@JamesPlayer
Author

@evertbouw @jayphelps thanks for your responses. As @jayphelps pointed out, it's not only the reducers that can be the source of these downstream, epic-terminating errors; it could be any unforeseen error in middleware too. I can't really assume that no errors will ever occur in middleware, so wrapping the root reducer in a try/catch wouldn't completely put my mind at ease; I'd still need something else to handle errors in the middleware too.

IMO, making sure that epics can still listen for new actions after an error occurs is the most important thing, so baked-in downstream error handling would be my preference, perhaps with the ability to turn it off via something like catchDownstreamErrors: false?

In the meantime adding some custom error handling middleware solves the problem, thank you very much!

@bjornicus

It also seems that upstream errors that are not handled properly (e.g. a forgotten catchError in a pipe in one of the epics in an app) will terminate the root epic. This was unexpected to me, and unfortunate. I ended up cooking up a wrapper around combineEpics so that if one of them terminates with an error, the rest will keep functioning:

import { of } from 'rxjs';
import { catchError } from 'rxjs/operators';
import { combineEpics } from 'redux-observable';

// Convert a terminal error from one epic into an action so the others keep running.
function protect(epic) {
  return (action$, state$, dependencies) =>
    epic(action$, state$, dependencies).pipe(
      catchError(error => of({ type: 'EPIC_FAILURE', error }))
    );
}
export function combineAndProtectEpics(...epics) {
  return combineEpics(...epics.map(protect));
}

With that though, the failing epic will be done; what I'd really like is for it to stay alive as well, but I don't see a good way of doing that.

@evertbouw
Member

@bjornicus you can return the second argument your error handler receives. This is the observable the error was caught in.

catchError((error, caught) => merge(
  of({ type: 'EPIC_FAILURE', error }),
  // returning `caught` resubscribes to the source, so the epic keeps listening
  caught,
)),

The observable will lose any internal state so I wouldn't do this with my root epic, but for individual epics this should be fine.
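Placed at the end of an individual epic, that looks roughly like this (the ping/pong epic mirrors the playground example from this thread; the exact operators are illustrative):

import { ofType } from 'redux-observable';
import { merge, of } from 'rxjs';
import { catchError, map } from 'rxjs/operators';

const pingEpic = action$ => action$.pipe(
  ofType('PING'),
  map(() => ({ type: 'PONG' })),
  catchError((error, caught) => merge(
    of({ type: 'EPIC_FAILURE', error }),
    caught, // resubscribe so this epic keeps listening for future PINGs
  )),
);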

@bjornicus

Oh neat, that worked. Clever, but non-obvious until you really think hard about what it's doing with the observables you produce (presumably just merging them all?). I wonder if this would be useful to add to the error handling recipes in the documentation?

@sekhavati

This past week I hit a similar issue where it appeared the root epic had stopped listening. I'll try to give a brief overview of what was occurring in our situation. The root cause was an epic in the codebase returning an Observable where one of the emitted values turned out to be undefined instead of an action, e.g.:

action$.pipe(
  ofType(SOME_ACTION),
  // …do some stuff…
  mergeMap(data => of(undefined, createActionA(data), createActionB(data)))
);

The undefined being emitted on the returned Observable wasn't quite as blatant in our case as I've depicted above; it actually came from calling a function which didn't return anything.

Anyhoo, in another piece of middleware elsewhere (not an epic) we had a line like this:

const isRelevantAction = ({ type }) => RELEVANT_ACTIONS.includes(type);

export default store => next => action => {
  // Throws a TypeError here when `action` is undefined, because `{ type }`
  // can't be destructured from undefined.
  if (isRelevantAction(action)) {
    // ...
  }
  return next(action);
};
This blew up when it tried to destructure type from action while action was undefined.

Now, I know we shouldn't have been in this situation in the first place; only actions should have been spat out of the epic. However, this was a codebase I inherited, so it is what it is, and it did happen, unfortunately.

This error somehow seemed to stop the root epic from listening to subsequent actions, which is the most alarming thing for me.

I'm very interested to hear your thoughts.

One thought I have had, @jayphelps: I wondered if redux-observable could check what's coming out of epics and try to verify that they contain action-like objects (i.e. test that they have at least a type property), then warn or error if it finds something to the contrary. Maybe this is naive thinking on my part though...
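In the meantime, here's a hedged user-land sketch of that idea (onlyActions is a hypothetical helper, not an existing redux-observable API):

import { filter } from 'rxjs/operators';

// Drop (and report) anything an epic emits that doesn't look like an action,
// instead of letting it reach dispatch and blow up in a reducer or middleware.
const onlyActions = source$ => source$.pipe(
  filter(output => {
    const looksLikeAction = output != null && typeof output.type === 'string';
    if (!looksLikeAction) {
      console.warn('Epic emitted a non-action value:', output);
    }
    return looksLikeAction;
  })
);

It could be applied per epic, e.g. epic(action$, state$).pipe(onlyActions), or folded into a combineEpics wrapper like the protect() helper shown earlier.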

@tylerweitzman

@bjornicus you can return the second argument your error handler receives. This is the observable the error was caught in.

catchError((error, caught) => merge(
	of({ type: 'EPIC_FAILURE', error }),
	caught,
)),

The observable will lose any internal state so I wouldn't do this with my root epic, but for individual epics this should be fine.

@evertbouw the value caught is my entire tree of epics (I am running catchError on the root). How do I identify from this which one instigated the error? There is a hasError: false key in the tree, but it seems unrelated, as none of the epics have a true value for that key.

@punksta

punksta commented Jun 6, 2019

I faced the same problem.

Following the recommendations here, I've wrapped the root reducer and applied a new middleware (which should be applied last).

In showAppCriticalErrorAlert I show an alert with a friendly message and then crash the app using an async error rethrow.

export const showAppCriticalErrorAlert = (
	error?: Error | string = "critical error"
) => {
	Alert.alert(
		getAlertTitle(),
		getAlertMessage(),
		[
			{
				text: "ok",
				onPress: () => {
					killApp(error);
				}
			}
		],
		{
			cancelable: false
		}
	);
};

const killApp = (error: Error | string) => {
	setTimeout(() => {
		throw error;
	}, 0);
};

export const catchErrorMiddleware = () => (next: any => any) => (
	action: any
) => {
	try {
		next(action);
	} catch (e) {
		showAppCriticalErrorAlert(e);
	}
};

export const catchErrorReducerWrapper: ReducerWrapper<any, any> = reducer => {
	return (state, action) => {
		try {
			return reducer(state, action);
		} catch (e) {
			showAppCriticalErrorAlert(e);
			// Keep the previous state so Redux doesn't receive undefined
			// while the alert is shown and the app is being killed.
			return state;
		}
	};
};

It works for all critical errors in reducers from react-redux and in all middleware (thunks and redux-observable in my case).

Also, I have catchError in most of my epics, so the code above only handles truly critical errors.
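Wired together it looks roughly like this (the store setup is an assumption based on the description above, not code from the comment):

const store = createStore(
  catchErrorReducerWrapper(rootReducer),
  // Applied last, so reducer errors are caught before they can propagate back
  // into the epic middleware's subscription.
  applyMiddleware(epicMiddleware, catchErrorMiddleware)
);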

@jamesmfriedman

It took me quite a while to stumble upon the same solution that @evertbouw has. IMO, that should DEFINITELY be in the documentation, since it's a lot easier (and cleaner) to catch errors at the top level of each epic rather than having to worry about an inner observable.
