Running every step, guarding the whole way: the security task for the 2023 Qingdao Marathon was successfully completed.

At 7:30 on April 22nd,

the 2023 Qingdao Marathon

fired its starting gun on Hong Kong Middle Road.

Runners surged forward together with unstoppable momentum.

Overwhelming cheers and shouts

rang out over the island city.

At this moment, the island city showed us

real speed and passion.

The cheers of the crowd resounded through the sky.

Have you noticed

the figures standing watch behind the runners?

A line of navy blue,

standing tall and full of energy.

With a standardized, rigorous, and meticulous working attitude,

a good mental outlook, and a professional bearing on duty,

they went all out to safeguard the marathon!

To complete the marathon security work to a high standard and with high quality,

the Shibei Public Security Bureau planned carefully and deployed its officers scientifically,

implementing every security measure strictly and meticulously.

On the day of the race, more than 460 police officers

and more than 1,150 security guards were on duty.

Along its responsible section of the 42.195-kilometer course,

Shibei public security provided high-quality, efficient service,

effectively ensuring that the event ran safely and in good order.

Careful preparations before the race

To ensure that the event ran safely and smoothly,

the Shibei Public Security Bureau's marathon security work

began well in advance.

Leaders of the sub-bureau went out to supervise and inspect the security preparations along the marathon route, examined the main intersections on the spot, and reviewed the police deployment at each point in detail, requiring the police and auxiliary police on duty to cooperate closely with event staff, strengthen coordination, and go all out to secure the marathon.

Building a wall of safety around the race

Spectators and athletes entered the venue one after another.

The officers held their posts, ready for action.

At key points along the course,

they directed and dispatched, patrolled on duty, and guided the crowds,

escorting the runners

and building a wall of safety around the race.

Guarding every step, never relaxing.

The runners on the course made a wonderful showing:

they ran with passion,

they ran with acceleration.

Let's cheer for them together!

Competing while escorting

In the runners' carnival,

with their silent protection,

they are both guardians of the course

and contestants in the race.

The course did not lack

the figures of police officers striving bravely for the lead.

They competed alongside the runners while standing duty with their fellow officers,

together protecting the safety of the track.

Guarding all the way to the finish line

At the finish,

cameras recorded the athletes' finest moments,

and also captured

the guardianship and dedication of every police officer and auxiliary officer on duty.

The publicity department of the Shibei Public Security Bureau documented the security work of this marathon.

All the police and auxiliary police on duty had arrived at their security points by 6 a.m.

By 11 a.m., as spectators and athletes dispersed one after another,

the officers of the Shibei Public Security Bureau

brought the event to a successful conclusion.

Photography | Hu Xiaoyang, Zong Xiaoxiang

Editor | Liu Ling

Review | Hu Xiaoyang

Enviable! A female fan took photos while getting Cristiano Ronaldo's autograph; "The President" reminded her: never mind the phone, hold on to the jersey.

Live broadcast: On March 11th, in round 20 of the Saudi league, the Riyadh side lost 0-1 to the Jeddah side, giving up the top spot.

After the game, Cristiano Ronaldo signed autographs for fans. One female fan was so excited that she kept taking photos while holding the jersey. Ronaldo kindly reminded her: "Never mind the phone, take the jersey." The fan then shouted "Hala Madrid".

The technology behind image-generating AI

In the past few years, artificial intelligence (AI) has made great progress, and among its newest products are AI image generators: tools that convert an input sentence into an image. There are many text-to-image AI tools, but the most prominent are DALL-E 2, Stable Diffusion, and Midjourney.

DALL-E 2 was developed by OpenAI as a project complementary to ChatGPT. It generates images from a paragraph of text description. Its GPT-3 transformer model, trained with more than 10 billion parameters, interprets natural-language input and generates the corresponding image.

DALL-E 2 consists mainly of two parts: one that converts the user's input into a representation of an image (called the Prior), and one that converts this representation into an actual image (called the Decoder).

The text and images it uses are embedded by another network called CLIP (Contrastive Language-Image Pre-training), also developed by OpenAI. CLIP is a neural network that returns the best caption for an input image. It does the opposite of DALL-E 2: it maps images to text, while DALL-E 2 maps text to images. CLIP is introduced to learn the connection between the visual and textual representations of objects.
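The image-to-caption matching that CLIP performs can be illustrated with cosine similarity between embeddings. The sketch below is a toy, assuming hand-made 4-dimensional vectors in place of real CLIP encoder outputs; only the matching logic (normalize, dot product, argmax) reflects the idea described above.

```python
# A minimal sketch of CLIP-style image-text matching, assuming toy
# 4-dimensional embeddings instead of real encoder outputs.
import numpy as np

def normalize(v):
    """Scale each row vector to unit length so dot products become cosine similarity."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def best_caption(image_emb, caption_embs):
    """Return the index of the caption whose embedding is closest to the image's."""
    sims = normalize(caption_embs) @ normalize(image_emb)
    return int(np.argmax(sims))

# Hypothetical embeddings: the "dog" image should match the "a dog" caption.
image_emb = np.array([0.9, 0.1, 0.0, 0.1])   # pretend image-encoder output
captions = np.array([
    [0.10, 0.90, 0.00, 0.00],                # "a cat"
    [0.88, 0.12, 0.05, 0.10],                # "a dog"
    [0.00, 0.00, 1.00, 0.00],                # "a car"
])
print(best_caption(image_emb, captions))     # 1, the index of "a dog"
```

The real CLIP is trained contrastively so that matching image-text pairs end up with high cosine similarity; at inference the "best title" is simply the highest-scoring caption.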

Training DALL-E 2 means training two models. The first is the Prior, which accepts a text caption and creates a CLIP image embedding. The second is the Decoder, which accepts a CLIP image embedding and generates an image. Once training is complete, inference proceeds as follows:

  • The input text is converted into a CLIP text embedding using a neural network.

  • Principal component analysis reduces the dimensionality of the text embedding.

  • An image embedding is created from the text embedding.

  • In the Decoder step, a diffusion model turns the image embedding into an image.

  • The image is upsampled from 64×64 to 256×256, and finally to 1024×1024, using convolutional neural networks.
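The steps above can be sketched end to end. This is only a shape-level toy: random matrices stand in for the trained CLIP encoder, Prior, and Decoder, the diffusion step is omitted, and nearest-neighbour upsampling replaces the convolutional upsamplers; only the tensor shapes (512-d text embedding, 64×64 base image, 4× upsampling twice) mirror the pipeline described.

```python
# Toy walk-through of the DALL-E 2 inference steps; random matrices stand in
# for the trained networks, so only the shapes are meaningful.
import zlib
import numpy as np

rng = np.random.default_rng(0)

def clip_text_embed(text):
    # Stand-in for the CLIP text encoder: a deterministic 512-d vector per string.
    return np.random.default_rng(zlib.crc32(text.encode())).standard_normal(512)

def pca_reduce(emb, w):
    return w @ emb                     # project 512 -> 64 dims (toy "PCA")

def prior(text_emb, w):
    return w @ text_emb                # toy Prior: text embedding -> image embedding

def decoder(image_emb, w):
    # Toy Decoder: map the embedding straight to a 64x64 RGB image (no diffusion).
    return (w @ image_emb).reshape(64, 64, 3)

def upsample(img, factor):
    # Nearest-neighbour upsampling via a Kronecker product, one channel at a time.
    return np.stack([np.kron(img[..., c], np.ones((factor, factor)))
                     for c in range(img.shape[-1])], axis=-1)

w_pca = rng.standard_normal((64, 512))
w_prior = rng.standard_normal((64, 64))
w_dec = rng.standard_normal((64 * 64 * 3, 64))

emb = pca_reduce(clip_text_embed("a corgi playing a trumpet"), w_pca)
img = decoder(prior(emb, w_prior), w_dec)   # 64x64x3 base image
img = upsample(upsample(img, 4), 4)         # 64 -> 256 -> 1024
print(img.shape)                            # (1024, 1024, 3)
```

In the real system each arrow is a learned network and the Decoder runs a full diffusion process, but the data flow from text to a 1024×1024 image follows this order.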

Stable Diffusion is a text-to-image model that uses the CLIP ViT-L/14 text encoder and conditions the model on text prompts. At runtime it treats image formation as a "diffusion" process: starting from pure noise, it gradually refines the image until no noise remains, progressively approaching the provided text description.

Stable Diffusion is based on the Latent Diffusion Model (LDM), a state-of-the-art text-to-image synthesis technique. Before looking at how LDM works, let's review what a diffusion model is and why LDM is needed.

A diffusion model (DM) is a generative model that takes a piece of data (such as an image) and gradually adds noise over time until the data is unrecognizable. The model then tries to restore the data to its original form, and in the process learns how to generate images or other data.
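The forward "noising" half of this process is easy to write down. The sketch below assumes the standard DDPM-style linear beta schedule (the schedule values are illustrative, not Stable Diffusion's exact ones) and uses a toy 8×8 array in place of an image; the key identity is that x_t is a scaled copy of x_0 plus Gaussian noise.

```python
# Minimal sketch of the forward diffusion ("noising") process, assuming a
# linear beta schedule; x0 is a toy 8x8 "image" rather than real data.
import numpy as np

rng = np.random.default_rng(42)
T = 1000
betas = np.linspace(1e-4, 0.02, T)      # per-step noise amounts
alphas_bar = np.cumprod(1.0 - betas)    # cumulative fraction of signal kept

def q_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0): scaled original signal plus Gaussian noise."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * noise

x0 = rng.standard_normal((8, 8))        # toy starting "image"
xt_early = q_sample(x0, 10)             # still mostly signal
xt_late = q_sample(x0, T - 1)           # almost pure noise

print(xt_late.shape)                    # (8, 8)
```

By the final step `alphas_bar[T-1]` is close to zero, so `xt_late` is essentially unrecognizable noise; training teaches a network to invert these steps one at a time.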

The problem with DMs is that powerful ones consume a lot of GPU resources, and inference is expensive because of its sequential evaluations. To train a DM on limited computing resources without sacrificing quality or flexibility, Stable Diffusion applies the DM in the latent space of powerful pre-trained autoencoders.
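The saving from working in latent space can be made concrete. In the sketch below a toy "autoencoder" (average pooling and nearest-neighbour upsampling stand in for the trained VAE, and the 8× factor is illustrative) compresses the image before any diffusion would run, so the diffusion model touches far fewer values per step.

```python
# Rough sketch of the latent-diffusion idea: compress with a toy "autoencoder"
# so that diffusion operates on a much smaller latent array.
import numpy as np

def encode(img, f=8):
    """Downsample by factor f with average pooling (stand-in for the encoder)."""
    h, w = img.shape
    return img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def decode(latent, f=8):
    """Upsample back with a Kronecker product (stand-in for the decoder)."""
    return np.kron(latent, np.ones((f, f)))

img = np.random.default_rng(1).standard_normal((256, 256))
z = encode(img)                 # diffusion would run on this 32x32 latent
print(img.size // z.size)       # 64: the DM touches 64x fewer values per step
```

Since every denoising step scales with the number of values processed, shrinking each side by 8× cuts the per-step cost by roughly 64×, which is what makes training and inference feasible on modest hardware.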

Training the diffusion model on this premise makes it possible to reach a near-optimal balance between reducing complexity and preserving detail, which significantly improves visual fidelity. Introducing cross-attention layers into the model architecture makes the diffusion model a powerful and flexible generator and enables convolution-based high-resolution image generation.

Midjourney is also an AI-driven tool that generates images from user prompts. It is good at adapting real artistic styles and creating images with whatever combination of effects the user wants. It excels at environments, especially fantasy and science-fiction scenes, which resemble the concept art of games.

DALL-E 2 was trained on millions of images, and its output is more polished, which makes it well suited for business use. When an image contains more than two characters, DALL-E 2's results are much better than those of Midjourney or Stable Diffusion.

Midjourney is a tool famous for its artistic style. It uses a Discord bot to send requests to and receive results from its AI servers, so almost everything happens on Discord. The resulting images rarely look like photos; they look more like paintings.

Stable Diffusion is an open-source model that anyone can use. It has a good understanding of contemporary artistic images and can produce artworks full of detail, but it requires carefully constructed, complex prompts. Stable Diffusion is better suited to generating complex, creative illustrations, though it has shortcomings when creating simpler, general images.