
Allow ability to stop server. #83

Open
cowboy opened this issue Jan 17, 2014 · 17 comments

cowboy (Member) commented Jan 17, 2014

I'd like to add the ability to stop a currently-running server. I propose that:

  • the plugin keeps a cache of running server instances by target name
  • if the stop flag is specified and a server with the same target has been started, it will be stopped and removed from the cache
  • this behavior will happen automatically before the task starts a server (on by default; it can be disabled with an option)
// Stopping a server explicitly.
grunt.registerTask('do_something', ['connect:dev', 'something', 'connect:dev:stop']);

// Stopping a server implicitly (otherwise there would be a "port in use" error here).
grunt.registerTask('do_something', ['connect:dev', 'something']);
grunt.registerTask('do_something_else', ['connect:dev', 'something_else']);
grunt.registerTask('do_everything', ['do_something', 'do_something_else']);

Additionally, I'd like to consider this:

  • in addition to a server-by-target cache, what if there were also a server-by-port cache, so that if you start a second server on the same port, the task kills whatever was already running on that port before it starts (on by default; this could be disabled with an option)?
// Stopping a server by port, not target (dev and prod servers have the same port).
grunt.registerTask('do_something', ['connect:dev', 'something']);
grunt.registerTask('do_something_else', ['do_something', 'something_else', 'connect:prod']);

My use-case:

Let's say I need to run my dev server when I do my integration tests:

grunt.registerTask('test-integration', ['connect:dev', 'mochaTest']);

And let's say both my dev and prod servers use the same port, but otherwise have different configurations.

grunt.initConfig({
  connect: {
    options: {
      port: 8000
    },
    dev: {
      options: {
        base: ['prod', '.']
      }
    },
    prod: {
      options: {
        keepalive: true,
        base: ['prod']
      }
    }
  }
});

What happens when I want to run my integration tests before doing my production build, which includes starting the prod web server?

grunt.registerTask('prod', ['test-integration', 'build_tasks', 'connect:prod']);

Because I can't stop the dev server at the end of test-integration, the prod server can't start: the port is already in use. And changing the port between the two just to work around this issue isn't practical.

Thoughts?

shama (Member) commented Jan 17, 2014

Just a heads up: keeping the servers in a cache will only work with watch when spawn: false is enabled. The other option, which would work with both watch modes, is to ping the server to check whether it's active and have the server provide a POST endpoint to call so the server can stop itself. Or maybe there is a better solution, maybe cluster?
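
A rough sketch of that idea, kept independent of grunt-contrib-connect (the /_stop path and port 8000 are made up for illustration):

var http = require('http');

// The server exposes a POST endpoint that makes it shut itself down, so
// another process can stop it without sharing a process context with it.
var server = http.createServer(function (req, res) {
  if (req.method === 'POST' && req.url === '/_stop') {
    res.end('stopping');
    server.close(); // stop accepting connections; the process can then exit
  } else {
    res.end('hello'); // normal request handling would go here
  }
});
server.listen(8000);

// A grunt task in another process could then stop it with something like:
// http.request({ port: 8000, path: '/_stop', method: 'POST' }).end();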

cowboy (Member) commented Jan 17, 2014

Man, I REALLY wish spawn: false was the default.
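
For reference, opting in to that today with grunt-contrib-watch looks like this (the target name, file globs, and task list here are illustrative):

watch: {
  dev: {
    files: ['src/**/*.js'],
    tasks: ['connect:dev', 'mochaTest'],
    options: {
      spawn: false // run tasks in the watch process, so server instances could be cached
    }
  }
}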

sindresorhus (Member) commented:

@shama have you thought about using the vm module as an alternative to spawning?
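
For context, a minimal sketch of evaluating code in a separate context with Node's vm module (the sandbox contents here are illustrative):

var vm = require('vm');

// Code run through the vm module gets its own global object, but it still
// shares the process with the caller.
var context = vm.createContext({ result: null });
vm.runInContext('result = 1 + 1;', context); // the context acts as the code's global object
console.log(context.result); // 2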

shama (Member) commented Jan 17, 2014

I'm fine switching the default to spawn: false. I just don't want to deal with the support issues it will incur when users don't understand why their modules are bleeding into each other, especially while running test suites. I already get quite a few spawn: false issues opened even without it being the default.

I'm open to trying vm. It's marked unstable, and I've noticed that any core Node module marked unstable is aptly so. But still, any solution that involves sharing the context won't fix the issue of the contexts affecting each other.

sindresorhus (Member) commented:

@shama it was rewritten in 0.12, so it might be more stable now, dunno. Isn't the whole point of the vm module that they don't share context?

shama (Member) commented Jan 17, 2014

Grunt still has to support Node >= 0.8. And if we used vm with unshared contexts, it would have the same issue: a task couldn't cache server instances across subsequent runs.

shama (Member) commented Jan 17, 2014

The best solution, IMO, is to pull out the task-running parts of Grunt and have the watch use that rather than its own wrapper around the current system.

eddiemonge (Contributor) commented:

Another use case for this, if I am reading it right, is that you can have the server restart when changes are made to the Gruntfile. Watch can trigger the server to stop and then restart with the newly loaded Gruntfile. Unless you can do this already and I am in the dark about it.
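
A sketch of what that could look like with grunt-contrib-watch, assuming the connect:dev:stop target proposed in this issue existed (it doesn't yet):

watch: {
  gruntfile: {
    files: ['Gruntfile.js'],
    tasks: ['connect:dev:stop', 'connect:dev']
  }
}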

shama (Member) commented Mar 20, 2014

This can be done; currently the server just needs to provide a way to halt itself rather than relying on variables stored within a single process context.

FWIW, the watch in grunt-next doesn't spawn by default.

soundasleep commented:

I'm running parameterised tests, and the inability to stop a connect server means I've had to move the connect task out to a parent task, so the child tasks can't be run separately anymore.

Ideally we could have something like connect:connect, optionally followed by connect:disconnect.

jec006 commented Jul 22, 2014

Broadcasting an event as part of done, so that we could enqueue the next task only once the server had stopped, would also work, I think (for my use case at least: I want to run a set of e2e tests in Angular and restart the server with a new config each time).

Edit: actually, because of the way Grunt treats the async task, this won't work 👎

c0bra commented Aug 22, 2014

👍

chchrist commented:

I need to be able to stop the server too. It seems that the pull request never made it. Any workaround?
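
One possible workaround is to skip grunt-contrib-connect for this case and manage the server in a pair of custom tasks, so there is a reference to close later. This is only a sketch: it assumes the tasks run in the same Grunt process (i.e. not spawned by watch) and that the serve-static and finalhandler packages are installed; the task names, port, and base directory are illustrative.

var http = require('http');
var serveStatic = require('serve-static');
var finalhandler = require('finalhandler');

var server; // shared between the start and stop tasks (same process)

grunt.registerTask('start-server', function () {
  var done = this.async();
  var serve = serveStatic('.'); // serve files from the project root
  server = http.createServer(function (req, res) {
    serve(req, res, finalhandler(req, res));
  });
  server.listen(8000, done);
});

grunt.registerTask('stop-server', function () {
  var done = this.async();
  if (server) {
    server.close(done); // frees the port for whatever runs next
  } else {
    done();
  }
});

// e.g. grunt.registerTask('test-integration', ['start-server', 'mochaTest', 'stop-server']);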

nykolaslima commented:

Is this issue still open? Is there any current solution for it?

masiorama commented:

I could use a feature like that too.

judsonmusic commented May 26, 2016

I have this...
connect: {
  server: {
    options: {
      port: 9001,
      base: '',
      open: true,
      keepalive: true
    }
  }
},

So how do I disconnect each time before connecting?
