Centralized Logging Solution for Google Cloud Platform (Cloud Next '18) (2)

So my first stop, and the first stop for many customers, is to check out Error Reporting to see if it's automatically identified anything for us.

And sure enough, having looked through our logs,

out of all of those errors, it's identified a specific error

in our application that someone tried

to set the quantity to less than zero,

which caused a specific error.

And I can click on that.

I can see exactly how many customers have

been impacted by this error.

I can see the stack trace so that I can tell exactly where in my code it came from,

and I could go to Stackdriver Debugging

to investigate further, if I wanted to, from here.

I can also jump directly back to the logs to view the raw logs.

I can link to an issue, so this one here

links to a GitHub issue, for example.

And I can also track the resolution status.

So for example, if one of our developers tells me, hey, I fixed this error.

It should be all set.

It's deployed.

I can go ahead and say, this error should be resolved,

and then we say, OK, no known errors.

That's great.

If I go back to my catalog here--

and we'll test it to see if this actually works--

and I add something to my cart, let's see

what happens if we try to set the quantity

to a negative number.

We'll update the basket, and I'll

come back over to Error Reporting,

and reload it here for just a moment.

And this usually updates in about five to 10 seconds,

and we can see that it automatically

identified that the error had been seen again and reopened the issue.

If I wanted to, I can also turn on notifications

so that rather than me going to this dashboard,

it'll proactively push alerts for new or re-opened errors

to my inbox.

So that's a very common use case we

hear from a lot of developers who

are using Stackdriver Logging.

But we hear a number of other examples

for, like, security use cases.

So something that I hear a lot is,

I want to be alerted if anyone adds an email address, let's say from the gmail.com domain.

So in the UI, I can interact with it

in sort of a point and click mode,

but I can also interact with the logs in the Advanced Filter,

and then type in more advanced queries.

So in the case of identifying something

that comes from a Gmail.com account,

I need to create a logs-based metric so that I can then

alert on that.

So I'm going to go over to my Logs-Based Metrics,

and I've created one here.

I'm going to go ahead and edit that metric.

For any log entry that sets an IAM policy binding a member at @gmail.com, I want to count all of these.
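The query itself isn't shown on screen, but a minimal sketch of such an advanced filter might look like this; the exact field paths are assumptions based on the shape of admin-activity audit logs:

```
protoPayload.methodName="SetIamPolicy"
protoPayload.serviceData.policyDelta.bindingDeltas.member:"@gmail.com"
```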

And that's kind of the first step.

So I'll be able to have dashboards about everyone who's

been added from an @gmail.com account,

and then I can also create alerting policies on that.
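As a hedged sketch, the same logs-based metric could also be created from the command line; the metric name here is illustrative:

```sh
# Count every audit log entry that binds an @gmail.com member in an IAM policy
# (same filter as the sketch above).
gcloud logging metrics create gmail-iam-additions \
  --description="IAM bindings that add an @gmail.com member" \
  --log-filter='protoPayload.methodName="SetIamPolicy" AND protoPayload.serviceData.policyDelta.bindingDeltas.member:"@gmail.com"'
```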

And I can see that earlier this morning, my Google account

did add somebody who was @gmail.com.

So let me go back to our logs-based metrics

and go ahead and show you how you would create

an alerting policy on this.

So I'm going to Create Alert from Metric.

This will pop me over into Stackdriver Monitoring, which

is where I manage all of my dashboards

and my alerting policies.

So I'm able to, here, see the logs-based metric.

Take a look here.

I don't have any recent ones, but I'm

going to go ahead and create an alerting policy if I ever

see this, because I don't ever expect to see this.

And instead of duration, I'm going to say most recent value.

So if this ever happens, I want to be alerted.

Go ahead and save the condition, and create a notification.

So I'll send it to my favorite email address,

auditlogsrock@gmail.com.

I can add some documentation in terms of who to contact.

Name this policy.

And I'll go ahead and save it.

So now if I go ahead and add a new member to an IAM policy

anywhere in this project, I will receive

a notification about this.
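For reference, the same alerting policy can be managed outside the UI as well; a rough sketch using the alpha gcloud surface, where the policy file's contents (the condition on the logs-based metric, the notification channel, the documentation) are an assumption rather than something shown in the talk:

```sh
# Create the alerting policy from a local JSON definition that conditions on
# the user/gmail-iam-additions metric and notifies auditlogsrock@gmail.com.
gcloud alpha monitoring policies create --policy-from-file=gmail-iam-alert.json
```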

Another common use case we hear from customers

is that they want to send their logs someplace else.

So this is the beauty of the centralized logging solution: it all comes in centrally,

and then we can slice and dice it and send it

to many different places using exports, which

is the log router that we were talking

about just a moment ago.

So I'll start out with a very common use

case, which is I want to send all of my audit logs

to BigQuery.

So I select a filter.

In this case, I'm going to say everything

that matches the activity audit logs in the log name.

And then I simply select BigQuery as the destination and create a sink.

And any future logs that match this will automatically

be sent to BigQuery, which is great,

but that only helps me for this project.

What if I want to do this at the organization level?

So in this case, I need to pull up my Cloud Shell here.

And I can use a gcloud command to set this

at the organizational level.

And I'll call out right here.

We have "include children," and we're setting this up

at the organization level.

Same thing, though-- the log name

matches the filter of anything that

has cloudaudit.googleapis.com.
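The command goes by quickly in the demo; a minimal sketch of an equivalent org-level export, with placeholder org ID, project, and dataset names, would be:

```sh
# Route audit logs from every project under the organization ("include
# children") to a single BigQuery dataset.
gcloud logging sinks create org-audit-sink \
  bigquery.googleapis.com/projects/my-logging-project/datasets/org_audit_logs \
  --organization=123456789012 \
  --include-children \
  --log-filter='logName:"cloudaudit.googleapis.com"'
```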

And I'm going to go ahead and save this,

and now any audit logs from anywhere

in my organization will all go to the same BigQuery

destination.

Now one tricky thing just to remember

is now that I set it up at the org level,

any audit logs that come from this project

will be written twice.

So in that case, I'd probably want

to go back and get rid of the one at the project level.

Another thing that we hear from users a lot is, I want to act on logs.

So in this case, let's say every time a new VM is spun up,

I want to take a look at it, process it

with Cloud Functions, maybe, and add some labels

or apply some rules to it.

So I'm going to create a sink for any time

that I have created an instance, which is the insert command.

And I'll be able to send all of these to Pub/Sub.

I can then use a Cloud Function to pull all of the logs

from Pub/Sub, process them, take whatever action I want on them.
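As a sketch, the sink half of that pipeline might look like the following; the topic name is a placeholder, and the method-name match is an assumption about how instance-creation entries are identified:

```sh
# Export GCE instance-creation audit logs to a Pub/Sub topic that a
# Cloud Function subscribes to for processing.
gcloud logging sinks create vm-creation-sink \
  pubsub.googleapis.com/projects/my-project/topics/vm-creations \
  --log-filter='resource.type="gce_instance" AND protoPayload.methodName:"compute.instances.insert"'
```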

So this is another common use case we see.

And then last but not least, we'll

talk about log exclusions.

So we have a page dedicated to helping

you understand what your log volume is

across your various Google Cloud resources or AWS resources

here as well.

And I can see, for example, taking a look here,

a lot of my volume in this project

is coming from Kubernetes, so that's the bulk of my logs.

I can see, though, that projected

through the end of the month is 43 gigabytes, which is well

within the free limit of 50 gigabytes,

so I'm not too, too worried.

But I could go ahead and say, you know what?

I'm going to send these logs maybe to my ELK stack.

I don't want to pay for them in Stackdriver.

I could go ahead and disable the logs altogether here,

or I could create an exclusion filter based

on this and, for example, say, anything that is less

than a warning level, I want to maybe just sample those logs.

So instead of excluding 100% of them,

maybe I will set this to 99%.
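A sketch of what that exclusion filter could look like, assuming the filter language's sample() function is used so that only 99% of the matching entries are excluded and the remaining 1% still gets ingested:

```
severity < WARNING AND sample(insertId, 0.99)
```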

I can also deep dive into the logs volume in Stackdriver

Monitoring using the Metrics Explorer tool

and visualize exactly where my volume of logs is coming from.

And if we could cut back please to the presentation, awesome.

EDUARDO SILVA: So now, with a solution like Stackdriver, we can say that logging is no longer boring. But before that, handling logs in different formats from many sources and correlating the data was quite hard. And I'm sure that if you are already working with distributed systems, or clusters, or with Kubernetes, there are quite a few challenges that need to be solved.

So what I'm going to explain now is how logging operates at the Kubernetes level. If you understand the problem and how it works behind the scenes, it means you can optimize your queries and get better insights from your data.

Do we have any Kubernetes users here?

Oh, good.

So I'm going to do a little introduction

about how logging works in Kubernetes

or when you play with Docker containers.

And basically, you have one application that emits a message, and that message is meant to be a log message saying a status, an alert, a warning, or anything related.

For example, we have, like, "Hey Next."

But it's not just about the message.

That message has some metadata that needs to be correlated.

One of them is at what time this message was generated,

and the other is the channel that this message was generated from.

If we speak about containers, there

are two main channels, standard output, standard error.

And here, for example, "Hey Next" is not just text. We have a JSON example with metadata.

That message then needs to be stored somewhere.

There are many ways to store container logs or Kubernetes logs, if, for example, we have systemd. But here we are going to refer to how Docker operates. Basically, your message is stored in the file system, in a path on the node.

So every message becomes a new JSON map,

and every message is appended at the end of that file.
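For reference, a sketch of what one of those appended JSON maps looks like under Docker's json-file driver; the path and timestamp are illustrative:

```sh
# Each stdout/stderr message is appended, one JSON map per line, to a file
# on the node's file system.
tail -n 1 /var/lib/docker/containers/<container-id>/<container-id>-json.log
# {"log":"Hey Next\n","stream":"stdout","time":"2018-07-24T17:30:00.000000000Z"}
```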

But things will become a little bit complex later,

because if we think about how Kubernetes

works from an architectural perspective,

we can think about this.

You have your application, the most simple use case.

That application is running in a container.

A container applies limits and restrictions, and allows you to set up certain policy rules and also control how this process that is running can communicate with others.

And when I say the "container," you

know that container is just a concept, right?

From an operating system perspective,

it's all about namespaces and cgroups.

So your application runs in a container,

and that container runs in a concept, which is called a pod.

So things get complex, because a pod

can have multiple containers.

Multiple pods can be running on the same node.

And by a node, I'm referring to a bare metal machine or a virtual machine.

So here is just one single machine, but in a real cluster,

you have many of them.

So imagine that you have your distributed application.

You told Kubernetes, please deploy my application.

Kubernetes, based on the scaling policies,

is going to decide where these containers will run

or will scale up.

It's going to create some replicas and decide on which nodes these replicas will run. And likely, not all of them will run on the same node. So this becomes complex.

If I had told you 20 years ago, please look at the logs from my application, well, you open the terminal. You do some SSH and look at what's going on with your application. Just cat the file, and you get the messages.

But with this, if you have a huge cluster, you cannot do an SSH on every node and try to find the local JSON file that belongs to the application you just deployed. And maybe that application has already been destroyed, because it failed or it was scaled up.
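Which is why, in Kubernetes, the usual move is to ask the API server for the logs instead of SSH-ing anywhere; a minimal sketch, with a hypothetical deployment name:

```sh
# Fetch recent logs from whichever nodes the pods of "myapp" landed on.
kubectl logs deployment/myapp --tail=100
```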

So things become more complex, and complex
