PHPFixing
Showing posts with label tornado. Show all posts

Wednesday, September 21, 2022

[FIXED] How to create a virtual host for a Tornado HTTP server

 September 21, 2022     python, tornado, virtualhost     No comments   

Issue

I want to redirect a local domain, e.g. http://mypage.local, to http://localhost:8888, where I am running a Tornado HTTP server that delivers the website. I got all the information from the official docs. The code is below (main.py).

I also added the following line to my /etc/vhosts file:

127.0.0.1:8888       mypage.local

But trying to open http://mysite.local results in a classic "Page not found" error. What am I doing wrong?

main.py:

from tornado.ioloop import IOLoop
from tornado.web import RequestHandler, Application, url

class MainHandler(RequestHandler):
    def get(self):
        self.write("<p>Hello, world</p><p><a href='/story/5'>Go to story 5</a></p>")

class StoryHandler(RequestHandler):
    def get(self, story_id):
        self.write("this is story %s" % story_id)

def make_app():
    return Application([
        url(r"/", MainHandler),
        url(r"/story/([0-9]+)", StoryHandler)  
    ])

def main():
    app = make_app()
    app.add_handlers(r"mypage.local", [
        (r"/story/([0-9]+)", StoryHandler),
    ])    
    app.listen(8888)
    IOLoop.current().start()


if __name__ == '__main__':
    main()

Solution

You should edit the /etc/hosts file instead, but note that it doesn't support port forwarding; it only maps hostnames to IP addresses. So you can write:

127.0.0.1       mysite.local

Then access your server at http://mysite.local:8888.
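The underlying reason is that /etc/hosts (like DNS in general) resolves a hostname to an IP address only; the port is always supplied separately by the client. A small stdlib sketch (using localhost so it runs anywhere; with the hosts entry above, mysite.local would resolve the same way):

```python
import socket

# /etc/hosts can only map a hostname to an IP address; there is
# nowhere in the mapping to put a port.
ip = socket.gethostbyname("localhost")
print(ip)  # typically 127.0.0.1

# The port comes from the URL, so http://mysite.local:8888 means:
# resolve "mysite.local" to an IP, then connect to (ip, 8888).
host, port = "mysite.local", 8888
```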

You can run Tornado on port 80 as root, but it is better to have nginx forward requests to Tornado:

server {
  listen 80;
  server_name mysite.local;

  location / {
    proxy_pass  http://127.0.0.1:8888;
    include /etc/nginx/proxy.conf;
  }
}
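The reason port 80 needs root is the Unix privileged-ports rule: ports below 1024 normally require root (or CAP_NET_BIND_SERVICE on Linux) to bind. A stdlib sketch illustrating this; the second call usually fails unless the process runs as root:

```python
import socket

def can_bind(port):
    """Try to bind a TCP socket to the given port on loopback."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(("127.0.0.1", port))
        return True
    except PermissionError:
        # privileged port and we lack the privilege
        return False
    finally:
        s.close()

print(can_bind(0))   # port 0 = any free unprivileged port: True
print(can_bind(80))  # privileged port: usually False unless running as root
```

This is why the nginx-in-front approach is preferred: nginx (started as root, workers dropped to an unprivileged user) owns port 80, and Tornado stays on an unprivileged port like 8888.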


Answered By - Eugene Soldatov
Answer Checked By - Mary Flores (PHPFixing Volunteer)

Tuesday, September 20, 2022

[FIXED] Why does an async consumer called in __init__ in a Tornado RequestHandler behave differently from one called statically?

 September 20, 2022     asynchronous, consumer, python, tornado     No comments   

Issue

I'm trying to make an async server using Tornado, with a unique queue for each handler. A job is placed into the queue when the endpoint is called, and a consumer function asynchronously "consumes" jobs from the queue. However, the consumer behaves differently depending on whether I call it as self.consumer() or as AsyncHandler.consumer(). My initial guess is instance-level locking, but I can't find evidence for it. I fire four POST requests consecutively. Here are the two snippets with their outputs.

import tornado.web
from tornado import gen
from time import sleep, time
from tornado.queues import Queue
from concurrent.futures import ThreadPoolExecutor
from tornado.ioloop import IOLoop

class AsyncHandler(tornado.web.RequestHandler):

    JOB_QUEUE = Queue()
    EXECUTOR = ThreadPoolExecutor()

    def post(self):
        job = lambda: sleep(3) or print("{}:handler called".format(int(time())))
        self.JOB_QUEUE.put(job)
        self.set_status(200)
        self.finish()

    @staticmethod
    @gen.coroutine
    def consumer():
        while True:
            job = yield AsyncHandler.JOB_QUEUE.get()
            print("qsize : {}".format(AsyncHandler.JOB_QUEUE.qsize()))
            print(AsyncHandler.JOB_QUEUE)
            output = yield AsyncHandler.EXECUTOR.submit(job)
            AsyncHandler.JOB_QUEUE.task_done()


if __name__ == "__main__":
    AsyncHandler.consumer()
    APP = tornado.web.Application([(r"/test", AsyncHandler)])
    APP.listen(9000)
    IOLoop.current().start()

This gives the expected output:

qsize : 0
<Queue maxsize=0 tasks=1>
1508618429:handler called
qsize : 2
<Queue maxsize=0 queue=deque([<function...<lambda> at 0x7fbf8f741400>, <function... <lambda> at 0x7fbf8f760ea0>]) tasks=3>
1508618432:handler called
qsize : 1
<Queue maxsize=0 queue=deque([<function AsyncHandler.post.<locals>.<lambda> at 0x7fbf8f760ea0>]) tasks=2>
1508618435:handler called
qsize : 0
<Queue maxsize=0 tasks=1>
1508618438:handler called

output = yield AsyncHandler.EXECUTOR.submit(job) takes 3 seconds to return, so the outputs arrive at 3-second intervals. We can also see the queue build up in the meantime.

Now to the interesting piece of code:

import tornado.web
from tornado import gen
from time import sleep, time
from tornado.queues import Queue
from concurrent.futures import ThreadPoolExecutor
from tornado.ioloop import IOLoop

class AsyncHandler(tornado.web.RequestHandler):
    JOB_QUEUE = Queue()
    EXECUTOR = ThreadPoolExecutor()

    def __init__(self, application, request, **kwargs):
        super().__init__(application, request, **kwargs)
        self.consumer()

    def post(self):
        job = lambda: sleep(3) or print("{}:handler called".format(int(time())))
        self.JOB_QUEUE.put(job)
        self.set_status(200)
        self.finish()

    @staticmethod
    @gen.coroutine
    def consumer():
        while True:
            job = yield AsyncHandler.JOB_QUEUE.get()
            print("qsize : {}".format(AsyncHandler.JOB_QUEUE.qsize()))
            print(AsyncHandler.JOB_QUEUE)
            output = yield AsyncHandler.EXECUTOR.submit(job)
            AsyncHandler.JOB_QUEUE.task_done()


if __name__ == "__main__":
    APP = tornado.web.Application([(r"/test", AsyncHandler)])
    APP.listen(9000)
    IOLoop.current().start()

The output weirdly (and pleasantly) looks like:

qsize : 0
<Queue maxsize=0 tasks=1>
qsize : 0
<Queue maxsize=0 tasks=2>
qsize : 0
<Queue maxsize=0 tasks=3>
qsize : 0
<Queue maxsize=0 tasks=4>
1508619138:handler called
1508619138:handler called
1508619139:handler called
1508619139:handler called

Note that now we're calling consumer inside __init__. We can see the tasks build up and execute in parallel (without a queue build up), completing almost simultaneously. It's as if output = yield AsyncHandler.EXECUTOR.submit(job) is not blocking on the future. Even after a lot of experimentation I'm unable to explain this behavior. I'd really appreciate some help.


Solution

The first application has only one running consumer, because consumer() is started exactly once at startup. Each job "blocks" that single consumer coroutine (only one iteration of its loop is in flight at a time), so the next job is processed only after the previous one finishes.

The second application starts a new consumer loop with every request, because Tornado creates a new RequestHandler instance per request. So the first job does not "block" the next one: each request brings a brand-new while True loop with its own get and submit.
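The effect can be reproduced with stdlib asyncio alone (a sketch, not Tornado; the 0.05-second sleep is a stand-in for the 3-second job above): one consumer drains jobs serially, while one consumer per "request" drains them concurrently.

```python
import asyncio
import time

async def consumer(queue):
    # mirrors the while-True / get / task_done loop from the handler
    while True:
        await queue.get()
        await asyncio.sleep(0.05)  # stand-in for the 3-second job
        queue.task_done()

async def drain(num_consumers, num_jobs=4):
    queue = asyncio.Queue()
    tasks = [asyncio.ensure_future(consumer(queue))
             for _ in range(num_consumers)]
    for i in range(num_jobs):
        queue.put_nowait(i)
    start = time.monotonic()
    await queue.join()          # wait until every job is task_done()
    elapsed = time.monotonic() - start
    for t in tasks:
        t.cancel()
    return elapsed

serial = asyncio.run(drain(1))    # one consumer: jobs run back to back
overlap = asyncio.run(drain(4))   # one consumer per job: jobs overlap
print(f"serial={serial:.2f}s overlapped={overlap:.2f}s")
```

The serial run takes roughly four times as long as the overlapped one, matching the two outputs in the question.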



Answered By - kwarunek
Answer Checked By - David Goodson (PHPFixing Volunteer)

Thursday, September 1, 2022

[FIXED] How to run a Luigi server against a custom URI

 September 01, 2022     luigi, nginx, nginx-reverse-proxy, tornado     No comments   

Issue

Can the Luigi server be run against http://localhost:8082/someString ?

Dash, for example, offers a convenient single-keyword way to do this; I was hoping to see something similar in Luigi.


Solution

So I figured out a way around this on my own. First of all, there does not seem to be an external (configuration-only) way of doing it. The only way I could do it is by modifying the handler list near line 300 of luigi/server.py:

    handlers = [
        (r'/api/(.*)', RPCHandler, {"scheduler": scheduler}),
        (r'/someString', RootPathHandler, {'scheduler': scheduler}),

Then, curl -L http://localhost:8082/someString works fine.
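An alternative worth trying, if you'd rather not patch luigi/server.py: strip the prefix in a reverse proxy instead. A sketch (untested; the listen port, upstream address, and prefix are assumptions, and the Luigi UI's absolute asset paths may still need extra rewriting):

```nginx
server {
  listen 80;
  server_name mysite.local;

  location /someString/ {
    # drop the prefix before handing the request to the stock Luigi server
    rewrite ^/someString/(.*)$ /$1 break;
    proxy_pass http://127.0.0.1:8082;
  }
}
```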



Answered By - tash
Answer Checked By - Pedro (PHPFixing Volunteer)

Friday, June 24, 2022

[FIXED] How to make nginx on different port balance to different ports?

 June 24, 2022     nginx, proxy, reverse-proxy, tornado     No comments   

Issue

I am listening on port 8080 with nginx and balancing across four Tornado instances on ports 8081, 8082, 8083 and 8084, using the nginx.conf below. How do I make nginx also listen on port 8090 and balance across ports 8091, 8092, 8093 and 8094? The Tornado instances running on 808* are different from the ones on 809*.

8080 balance on [8081, 8082, 8083, 8084]
8090 balance on [8091, 8092, 8093, 8094]

Here is the nginx.conf:

worker_processes 16;

error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;


#master_process off;
daemon off;

events {
    worker_connections 1024;
    use epoll;
}

http {
    charset utf-8;

    # Enumerate all the Tornado servers here
    upstream frontends {
        server 127.0.0.1:8081;
        server 127.0.0.1:8082;
        server 127.0.0.1:8083;
        server 127.0.0.1:8084;
    }

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    proxy_buffering off;
    proxy_buffers 4 512k;
    proxy_buffer_size 512k;

    access_log /var/log/nginx/access.log;

    #keepalive_timeout 65;
    #proxy_read_timeout 200;

    keepalive_timeout 600;
    proxy_connect_timeout       600;
    proxy_send_timeout          600;
    proxy_read_timeout          600;
    send_timeout                600;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    gzip on;
    gzip_min_length 1000;
    gzip_proxied any;
    gzip_types text/html text/plain text/css application/javascript application/x-javascript text/javascript text/xml application/xml;

    # Only retry if there was a communication error, not a timeout
    # on the Tornado server (to avoid propagating "queries of death"
    # to all frontends)
    proxy_next_upstream off;

    server {
        listen 8080;
        server_name localhost;

        location /
        {
            access_log off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # without this line nothing is forwarded to the Tornado instances
            proxy_pass http://frontends;
        }

        # Allow file uploads
        client_max_body_size 50M;


        location ^~ /static/ {
            root /home/server/;
            if ($query_string) {
                expires max;
            }
        }

    }
}

Solution

You need two upstream and two server blocks. Something like this (with other options filled in as before):

http {
  upstream eighties {
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
  }
  upstream nineties {
    server 127.0.0.1:8091;
    server 127.0.0.1:8092;
  }
  server {
    listen 8080;
    location / {
      proxy_pass http://eighties;
    }
  }
  server {
    listen 8090;
    location / {
      proxy_pass http://nineties;
    }
  }
}
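For intuition: nginx's default balancing within each upstream block is round-robin, and each listen port simply cycles through its own backend list independently. A conceptual sketch (not how nginx is implemented):

```python
import itertools

# each upstream block behaves like an independent round-robin cycle
eighties = itertools.cycle(["127.0.0.1:8081", "127.0.0.1:8082",
                            "127.0.0.1:8083", "127.0.0.1:8084"])
nineties = itertools.cycle(["127.0.0.1:8091", "127.0.0.1:8092",
                            "127.0.0.1:8093", "127.0.0.1:8094"])

# six requests arriving on port 8080 wrap around the 808* group only
picks = [next(eighties) for _ in range(6)]
print(picks)
```

Requests on port 8090 would cycle through the 809* group in the same way, never touching the 808* instances.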


Answered By - Ben Darnell
Answer Checked By - Senaida (PHPFixing Volunteer)
Copyright © PHPFixing